Abstract
In the era of large language models (LLMs), building multilingual large language models (MLLMs) that can serve users worldwide holds great significance. However, existing research seldom focuses on the truthfulness of MLLMs. Meanwhile, contemporary multilingual alignment techniques struggle to balance a large number of languages and often exhibit serious truthfulness gaps across languages, especially those that differ greatly from English. In this work, we construct a benchmark for truthfulness evaluation in multilingual scenarios and explore ways to align facts across languages to enhance the truthfulness of MLLMs. Furthermore, we propose Fact-aware Multilingual Selective Synergy (FaMSS) to optimize data allocation across a large number of languages and different data types. Experimental results demonstrate that our approach can effectively reduce multilingual representation disparity and enhance the multilingual capabilities of LLMs.
URL
https://arxiv.org/abs/2406.14434