A Survey on Mechanistic Interpretability for Multi-Modal Foundation Models

Bibliographic Details
Title: A Survey on Mechanistic Interpretability for Multi-Modal Foundation Models
Authors: Lin, Zihao, Basu, Samyadeep, Beigi, Mohammad, Manjunatha, Varun, Rossi, Ryan A., Wang, Zichao, Zhou, Yufan, Balasubramanian, Sriram, Zarei, Arman, Rezaei, Keivan, Shen, Ying, Yao, Barry Menglong, Xu, Zhiyang, Liu, Qin, Zhang, Yuxiang, Sun, Yan, Liu, Shilong, Shen, Li, Li, Hongxuan, Feizi, Soheil, Huang, Lifu
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
Abstract: The rise of foundation models has transformed machine learning research, prompting efforts to uncover their inner workings and develop more efficient and reliable applications for better control. While significant progress has been made in interpreting Large Language Models (LLMs), multimodal foundation models (MMFMs) - such as contrastive vision-language models, generative vision-language models, and text-to-image models - pose unique interpretability challenges beyond unimodal frameworks. Despite initial studies, a substantial gap remains between the interpretability of LLMs and MMFMs. This survey explores two key aspects: (1) the adaptation of LLM interpretability methods to multimodal models and (2) understanding the mechanistic differences between unimodal language models and cross-modal systems. By systematically reviewing current MMFM analysis techniques, we propose a structured taxonomy of interpretability methods, compare insights across unimodal and multimodal architectures, and highlight critical research gaps.
Comment: 30 pages, 4 figures, 10 tables
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2502.17516
Accession Number: edsarx.2502.17516
Database: arXiv