Abstract
Aligning language models (LMs) based on human-annotated preference data is a crucial step in obtaining practical and performant LM-based systems. However, multilingual human preference data are difficult to obtain at scale, making it challenging to extend this framework to diverse languages. In this work, we evaluate a simple approach for zero-shot cross-lingual alignment, where a reward model is trained on preference data in one source language and directly applied to other target languages. On summarization and open-ended dialog generation, we show that this method is consistently successful under comprehensive evaluation settings, including human evaluation: cross-lingually aligned models are preferred by humans over unaligned models on up to >70% of evaluation instances. We moreover find that a different-language reward model sometimes yields better aligned models than a same-language reward model. We also identify best practices when there is no language-specific data for even supervised finetuning, another component in alignment.
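To make the transfer recipe concrete, below is a minimal sketch of how a reward model trained on source-language (e.g., English) preference data could be reused to pick among candidate generations in a target language via best-of-n reranking. The model names, the pair-style reward-model input, and the best-of-n setup are illustrative assumptions for this sketch, not necessarily the paper's exact pipeline (which may also align the policy by optimizing against the transferred reward model).

```python
# Sketch: zero-shot cross-lingual reward-model transfer via best-of-n reranking.
# A reward model trained on ENGLISH preference data scores candidates generated
# in another language; the highest-scoring candidate is returned.
# Model identifiers below are hypothetical placeholders.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
)

POLICY_NAME = "my-org/multilingual-sft-policy"        # hypothetical SFT policy
REWARD_NAME = "my-org/english-preference-reward-model"  # hypothetical reward model

policy_tok = AutoTokenizer.from_pretrained(POLICY_NAME)
policy = AutoModelForCausalLM.from_pretrained(POLICY_NAME)
reward_tok = AutoTokenizer.from_pretrained(REWARD_NAME)
reward = AutoModelForSequenceClassification.from_pretrained(REWARD_NAME, num_labels=1)


def best_of_n(prompt: str, n: int = 8, max_new_tokens: int = 128) -> str:
    """Sample n target-language candidates, score each with the
    source-language-trained reward model, and keep the best one."""
    inputs = policy_tok(prompt, return_tensors="pt")
    outputs = policy.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=n,
        max_new_tokens=max_new_tokens,
    )
    prompt_len = inputs["input_ids"].shape[1]
    candidates = [
        policy_tok.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs
    ]
    # Cross-lingual step: the reward model never saw target-language preference
    # data, yet its scalar scores are used directly to rank these candidates.
    with torch.no_grad():
        scores = [
            reward(
                **reward_tok(prompt, c, return_tensors="pt", truncation=True)
            ).logits[0, 0].item()
            for c in candidates
        ]
    return candidates[scores.index(max(scores))]


# Example: a German summarization prompt scored by an English-trained reward model.
print(best_of_n("Fasse den folgenden Artikel zusammen: ..."))
```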
Abstract (translated)
Aligning language models based on human-annotated preference data is a key step in obtaining practical, high-performing LM-based systems. However, multilingual human preference data are difficult to obtain at scale, which makes it challenging to extend this framework to diverse languages. In this work, we evaluate a simple approach to zero-shot cross-lingual alignment, in which a reward model is trained on preference data in one source language and applied directly to other target languages. On summarization and open-ended dialog generation, we show that this method is consistently successful under comprehensive evaluation settings, including human evaluation: cross-lingually aligned models are preferred by humans over unaligned models on up to more than 70% of evaluation instances. We further find that a reward model in a different language sometimes yields better-aligned models than a reward model in the same language. We also identify best practices for the case where no language-specific data are available even for supervised fine-tuning, another component of alignment.
URL
https://arxiv.org/abs/2404.12318