Title:
AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information?
Authors:
Gong, Kaixiong; Feng, Kaituo; Li, Bohao; Wang, Yibing; Cheng, Mofan; Yang, Shijia; Han, Jiaming; Wang, Benyou; Bai, Yutong; Yang, Zhuoran; Yue, Xiangyu
Publication Year:
2024
Collection:
Computer Science
Subject Terms:
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Multimedia; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing
More Details:
Recently, multimodal large language models (MLLMs), such as GPT-4o, Gemini 1.5 Pro, and Reka Core, have expanded their capabilities to include vision and audio modalities. While these models demonstrate impressive performance across a wide range of audio-visual applications, our proposed DeafTest reveals that MLLMs often struggle with simple tasks humans find trivial: 1) determining which of two sounds is louder, and 2) determining which of two sounds has a higher pitch. Motivated by these observations, we introduce AV-Odyssey Bench, a comprehensive audio-visual benchmark designed to assess whether these MLLMs can truly understand audio-visual information. This benchmark comprises 4,555 carefully crafted problems, each incorporating text, visual, and audio components. To infer answers correctly, models must effectively leverage clues from both visual and audio inputs. To ensure precise and objective evaluation of MLLM responses, we structure the questions as multiple-choice, eliminating the need for human evaluation or LLM-assisted assessment. We benchmark a series of closed-source and open-source models and summarize our observations. By revealing the limitations of current models, we aim to provide useful insights for future dataset collection and model development. Comment: Project page: https://av-odyssey.github.io/
Document Type:
Working Paper
Access URL:
http://arxiv.org/abs/2412.02611
Accession Number:
edsarx.2412.02611
Database:
arXiv