Bibliographic Details
Title:
Dialogue is Better Than Monologue: Instructing Medical LLMs via Strategical Conversations
Authors:
Liu, Zijie; Zhao, Xinyu; Peng, Jie; Zhu, Zhuangdi; Chen, Qingyu; Hu, Xia; Chen, Tianlong
Publication Year:
2025
Collection:
Computer Science
Subject Terms:
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
More Details:
Current medical AI systems often fail to replicate real-world clinical reasoning, as they are predominantly trained and evaluated on static text and question-answer tasks. These tuning methods and benchmarks overlook critical aspects such as evidence-based reasoning and the handling of distracting information. To bridge this gap, we introduce a novel benchmark that simulates real-world diagnostic scenarios, integrating noise and difficulty levels aligned with USMLE standards. Moreover, we explore dialogue-based fine-tuning, which transforms static datasets into conversational formats to better capture iterative reasoning processes. Experiments show that dialogue-tuned models outperform traditional methods, improving accuracy by 9.64% in multi-round reasoning scenarios and by 6.18% in noisy environments. Our findings highlight dialogue tuning as a promising approach for advancing clinically aligned and robust medical AI systems.
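The dialogue-based fine-tuning mentioned in the abstract converts static question-answer records into multi-turn conversational training examples. The snippet below is a minimal, hypothetical sketch of one such conversion, assuming a simple record with question, options, answer, and rationale fields and a chat-style role/content message format; the field names and turn structure are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: turn a static multiple-choice QA item into a
# multi-turn, chat-style training example. Field names and the turn
# structure are assumptions for illustration only.

def qa_to_dialogue(item: dict) -> list[dict]:
    """Convert one static QA record into a list of chat messages."""
    options = "\n".join(f"{k}. {v}" for k, v in item["options"].items())
    return [
        # Turn 1: the case stem and answer options are posed as a user query.
        {"role": "user", "content": f"{item['question']}\n{options}"},
        # Turn 2: the model reasons before committing to an answer.
        {"role": "assistant", "content": item.get(
            "rationale", "Let me reason through the findings step by step.")},
        # Turn 3: a follow-up probe requests the final choice,
        # mimicking iterative clinical questioning.
        {"role": "user", "content": "Given that reasoning, which option do you choose?"},
        {"role": "assistant", "content": f"The answer is {item['answer']}."},
    ]

if __name__ == "__main__":
    example = {
        "question": "A 65-year-old presents with crushing chest pain. Most likely diagnosis?",
        "options": {"A": "Myocardial infarction", "B": "GERD", "C": "Costochondritis"},
        "answer": "A",
        "rationale": "Crushing chest pain in an older patient points first to an acute coronary event.",
    }
    for turn in qa_to_dialogue(example):
        print(turn["role"], ":", turn["content"])
```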
Document Type:
Working Paper
Access URL:
http://arxiv.org/abs/2501.17860
Accession Number:
edsarx.2501.17860
Database:
arXiv |