Multi-LLM Collaborative Caption Generation in Scientific Documents

Bibliographic Details
Title: Multi-LLM Collaborative Caption Generation in Scientific Documents
Authors: Kim, Jaeyoung; Lee, Jongho; Choi, Hong-Jun; Hsu, Ting-Yao; Huang, Chieh-Yang; Kim, Sungchul; Rossi, Ryan; Yu, Tong; Giles, Clyde Lee; Huang, Ting-Hao 'Kenneth'; Choi, Sungchul
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition
More Details: Scientific figure captioning is a complex task that requires generating contextually appropriate descriptions of visual content. However, existing methods often fall short by utilizing incomplete information, treating the task solely as either an image-to-text or a text-summarization problem. This limitation hinders the generation of high-quality captions that fully capture the necessary details. Moreover, existing data sourced from arXiv papers contain low-quality captions, posing significant challenges for training large language models (LLMs). In this paper, we introduce a framework called Multi-LLM Collaborative Figure Caption Generation (MLBCAP) to address these challenges by leveraging specialized LLMs for distinct sub-tasks. Our approach unfolds in three key modules: (Quality Assessment) We utilize multimodal LLMs to assess the quality of training data, enabling the filtering of low-quality captions. (Diverse Caption Generation) We then fine-tune or prompt multiple LLMs on the captioning task to generate candidate captions. (Judgment) Lastly, we prompt a prominent LLM to select the highest-quality caption from the candidates and then refine any remaining inaccuracies. Human evaluations demonstrate that informative captions produced by our approach rank better than human-written captions, highlighting its effectiveness. Our code is available at https://github.com/teamreboott/MLBCAP
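The three modules described in the abstract can be sketched as a simple pipeline. This is a minimal, hypothetical illustration only: the function names, the word-count quality heuristic, and the threshold are assumptions standing in for the (multimodal) LLM calls the paper actually uses, which are not specified in this record.

```python
def assess_quality(caption: str) -> float:
    """Module 1 (Quality Assessment): score a caption in [0, 1].
    Placeholder heuristic: longer captions score higher; the paper
    instead prompts multimodal LLMs to rate training captions."""
    return min(len(caption.split()) / 20.0, 1.0)


def filter_training_data(pairs, threshold=0.5):
    """Keep only (figure, caption) pairs whose caption passes the gate."""
    return [(fig, cap) for fig, cap in pairs if assess_quality(cap) >= threshold]


def generate_candidates(figure, models):
    """Module 2 (Diverse Caption Generation): each fine-tuned or
    prompted model proposes one candidate caption for the figure."""
    return [model(figure) for model in models]


def judge(candidates, score_fn):
    """Module 3 (Judgment): select the highest-scoring candidate.
    In the full framework, a prominent LLM both selects and then
    refines remaining inaccuracies; refinement is omitted here."""
    return max(candidates, key=score_fn)


if __name__ == "__main__":
    # Stub "models" standing in for distinct LLMs.
    models = [
        lambda fig: "A plot.",
        lambda fig: "Accuracy of three methods on the test set as training "
                    "data size grows, showing the proposed method leading.",
    ]
    candidates = generate_candidates("figure-1", models)
    print(judge(candidates, assess_quality))
```

The sketch only conveys the division of labor between modules; each placeholder would be replaced by an actual model call.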
Comment: Accepted to AAAI 2025 AI4Research Workshop
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.02552
Accession Number: edsarx.2501.02552
Database: arXiv