CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering

Bibliographic Details
Title: CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering
Authors: Wang, Yumeng, Fan, Zhiyuan, Wang, Qingyun, Fung, May, Ji, Heng
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
More Details: Large Language Models (LLMs) are pretrained on extensive multilingual corpora to acquire both language-specific cultural knowledge and general knowledge. Ideally, LLMs should provide consistent responses to culture-independent questions across languages; in practice, we observe significant performance disparities. To address this, we explore the Cross-Lingual Self-Aligning ability of Language Models (CALM) to align knowledge across languages. Specifically, for a given question, we sample multiple responses across different languages, select the most self-consistent response as the target, and treat the remaining responses as negative examples. We then employ direct preference optimization (DPO) to align the model's knowledge across languages. Evaluations on the MedQA and X-CSQA datasets demonstrate CALM's effectiveness in enhancing cross-lingual knowledge question answering, in both zero-shot and retrieval-augmented settings. We also find that increasing the number of languages involved in CALM training leads to higher accuracy and consistency. Finally, we offer a qualitative analysis of how cross-lingual consistency can enhance knowledge alignment and explore the method's generalizability.
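
The abstract outlines the core procedure (sample responses per language, pick the most self-consistent one as the target, use the rest as negatives for DPO) but gives no implementation details. The sketch below is a minimal illustration of that preference-pair construction step, assuming the sampled answers have already been normalized to a comparable form (e.g., multiple-choice option letters); all function and variable names are hypothetical and not taken from the paper.

    from collections import Counter
    from typing import Dict, List, Tuple

    def build_preference_pairs(
        answers_by_language: Dict[str, List[str]],
    ) -> List[Tuple[str, str, str]]:
        """Return (language, chosen, rejected) tuples for DPO training.

        `answers_by_language` maps a language code to answers sampled from
        the model for the same culture-independent question, normalized to a
        comparable form.
        """
        # Tally every sampled answer across all languages.
        tally = Counter(ans for answers in answers_by_language.values() for ans in answers)
        if not tally:
            return []

        # The most frequent (i.e., most self-consistent) answer is the target.
        chosen, _ = tally.most_common(1)[0]

        pairs = []
        for lang, answers in answers_by_language.items():
            for ans in answers:
                if ans != chosen:
                    # Each disagreeing sample becomes a negative example
                    # paired with the self-consistent target for that
                    # language's prompt.
                    pairs.append((lang, chosen, ans))
        return pairs

    if __name__ == "__main__":
        # Toy example: English and Spanish samples mostly agree; German drifts.
        sampled = {
            "en": ["B", "B", "A"],
            "es": ["B", "B", "B"],
            "de": ["C", "B", "C"],
        }
        for lang, good, bad in build_preference_pairs(sampled):
            print(f"[{lang}] chosen={good} rejected={bad}")

The resulting (chosen, rejected) pairs would then be fed, together with the corresponding language-specific prompts, to a standard DPO training loop; that step is not shown here.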
Comment: Accepted by NAACL 2025
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.18457
Accession Number: edsarx.2501.18457
Database: arXiv