SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning

Bibliographic Details
Title: SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning
Authors: Ma, Zexiong, Peng, Chao, Gao, Pengfei, Meng, Xiangxin, Zou, Yanzhen, Xie, Bing
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
More Details: Mainstream issue-resolving frameworks predominantly rely on commercial models, leading to high costs and privacy concerns. Existing training approaches for issue resolving struggle with poor generalization and fail to fully leverage open-source development resources. We propose Subtask-oriented Reinforced Fine-Tuning (SoRFT), a novel training approach to enhance the issue-resolving capability of LLMs. SoRFT decomposes issue resolving into structured subtasks: file localization, function localization, line localization, and code edit generation. SoRFT consists of two training stages: (1) rejection-sampled supervised fine-tuning, where Chain of Thought (CoT) data is filtered using ground truth before fine-tuning the LLM, and (2) rule-based reinforcement learning, which leverages PPO with ground-truth-based rewards. We evaluate the SoRFT-trained model on SWE-Bench Verified and SWE-Bench Lite, achieving state-of-the-art (SOTA) performance among open-source models (e.g., resolving 21.4% of issues on SWE-Bench Verified with SoRFT-Qwen-7B). The experimental results demonstrate that SoRFT significantly enhances issue-resolving performance, improves model generalization, and provides a cost-efficient alternative to commercial models.
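
The two ground-truth-driven stages described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' code: the F1-style overlap reward, the sample["answer"] field, and the acceptance threshold are assumptions made for illustration; the paper may define the filtering criterion and reward differently.

```python
def localization_reward(predicted: set[str], ground_truth: set[str]) -> float:
    """Rule-based reward: F1 overlap between predicted and gold locations
    (files, functions, or lines, depending on the subtask). Illustrative;
    the paper's exact reward definition may differ."""
    if not predicted or not ground_truth:
        return 0.0
    tp = len(predicted & ground_truth)
    precision = tp / len(predicted)
    recall = tp / len(ground_truth)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def rejection_sample(cot_samples: list[dict], ground_truth: set[str],
                     threshold: float = 1.0) -> list[dict]:
    """Stage 1 (rejection-sampled SFT): keep only CoT samples whose final
    answer agrees with the ground truth; the kept samples are then used
    for supervised fine-tuning. The "answer" field and threshold are
    hypothetical."""
    return [
        sample for sample in cot_samples
        if localization_reward(sample["answer"], ground_truth) >= threshold
    ]

# Stage 2 (rule-based RL): the same ground-truth comparison can serve as the
# scalar reward for PPO, e.g. reward = localization_reward(pred, gold).
```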
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2502.20127
Accession Number: edsarx.2502.20127
Database: arXiv