On the Compositional Generalization of Multimodal LLMs for Medical Imaging

Bibliographic Details
Title: On the Compositional Generalization of Multimodal LLMs for Medical Imaging
Authors: Cai, Zhenyang; Chen, Junying; Wang, Rongsheng; Wang, Weihong; Deng, Yonglin; Song, Dingjie; Chen, Yize; Zhang, Zixu; Wang, Benyou
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning
More Details: Multimodal large language models (MLLMs) hold significant potential in the medical field, but their capabilities are often limited by insufficient data in certain medical domains, highlighting the need to understand what kinds of images MLLMs can use for generalization. Current research suggests that multi-task training outperforms single-task training because different tasks can benefit each other, but it often overlooks the internal relationships within these tasks, providing limited guidance on selecting datasets to enhance specific tasks. To analyze this phenomenon, we employed compositional generalization (CG), the ability of models to understand novel combinations by recombining learned elements, as a guiding framework. Since medical images can be precisely defined by Modality, Anatomical area, and Task, they naturally provide an environment for exploring CG. We therefore assembled 106 medical datasets to create Med-MAT for comprehensive experiments. The experiments confirmed that MLLMs can use CG to understand unseen medical images and identified CG as one of the main drivers of the generalization observed in multi-task training. Additionally, further studies demonstrated that CG effectively supports datasets with limited data and delivers consistent performance across different backbones, highlighting its versatility and broad applicability. Med-MAT is publicly available at https://github.com/FreedomIntelligence/Med-MAT.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2412.20070
Accession Number: edsarx.2412.20070
Database: arXiv