Med-RLVR: Emerging Medical Reasoning from a 3B Base Model via Reinforcement Learning

Bibliographic Details
Title: Med-RLVR: Emerging Medical Reasoning from a 3B Base Model via Reinforcement Learning
Authors: Zhang, Sheng; Liu, Qianchu; Qin, Guanghui; Naumann, Tristan; Poon, Hoifung
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
More Details: Reinforcement learning from verifiable rewards (RLVR) has recently gained attention for its ability to elicit self-evolved reasoning capabilities from base language models without explicit reasoning supervision, as demonstrated by DeepSeek-R1. While prior work on RLVR has primarily focused on mathematical and coding domains, its applicability to other tasks and domains remains unexplored. In this work, we investigate whether medical reasoning can emerge from RLVR. We introduce Med-RLVR as an initial study of RLVR in the medical domain, leveraging medical multiple-choice question answering (MCQA) data as verifiable labels. Our results demonstrate that RLVR is not only effective for math and coding but also extends successfully to medical question answering. Notably, Med-RLVR achieves performance comparable to traditional supervised fine-tuning (SFT) on in-distribution tasks while significantly improving out-of-distribution generalization, with an 8-point accuracy gain. Further analysis of training dynamics reveals that, with no explicit reasoning supervision, reasoning emerges from the 3B-parameter base model. These findings underscore the potential of RLVR in domains beyond math and coding, opening new avenues for its application in knowledge-intensive fields such as medicine.
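The abstract's core idea, using MCQA answer keys as verifiable rewards, can be illustrated with a minimal sketch. The function name, answer-tag format, and parsing logic below are assumptions for illustration only; the paper does not specify this exact implementation.

```python
import re

def mcqa_reward(model_output: str, gold_choice: str) -> float:
    """Rule-based verifiable reward for multiple-choice QA (illustrative sketch).

    Returns 1.0 when the model's final answer letter matches the gold choice
    label (e.g. "B"), else 0.0. The <answer>...</answer> tag convention is an
    assumption, not taken from the Med-RLVR paper.
    """
    # Assume the policy is prompted to end its response with e.g. "<answer>B</answer>".
    match = re.search(r"<answer>\s*([A-E])\s*</answer>", model_output, re.IGNORECASE)
    if match is None:
        return 0.0  # unparseable output earns no reward
    return 1.0 if match.group(1).upper() == gold_choice.upper() else 0.0


# Example: a correct prediction for a question whose gold answer is "C".
print(mcqa_reward("The patient most likely has ... <answer>C</answer>", "C"))  # 1.0
```

Because the reward is computed by rule from the answer key rather than by a learned reward model, no explicit reasoning supervision is needed; any chain of thought the policy produces before the final answer is shaped only indirectly by this outcome reward.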
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2502.19655
Accession Number: edsarx.2502.19655
Database: arXiv