Abstract
Large Language Models (LLMs) are becoming crucial across various fields, emphasizing the urgent need for high-quality models in underrepresented languages. This study explores the unique challenges faced by low-resource languages, such as data scarcity, model selection, evaluation, and computational limitations, with a special focus on Turkish. We conduct an in-depth analysis to evaluate the impact of training strategies, model choices, and data availability on the performance of LLMs designed for underrepresented languages. Our approach includes two methodologies: (i) adapting existing LLMs originally pretrained in English to understand Turkish, and (ii) developing a model from the ground up using Turkish pretraining data, both supplemented with supervised fine-tuning on a novel Turkish instruction-tuning dataset aimed at enhancing reasoning capabilities. The relative performance of these methods is evaluated through a new leaderboard for Turkish LLMs, featuring benchmarks that assess different reasoning and knowledge skills. Furthermore, we conduct experiments on data and model scaling, both during pretraining and fine-tuning, which highlight the capacity for knowledge transfer across languages and the catastrophic forgetting encountered when fine-tuning on a different language. Our goal is to offer a detailed guide for advancing the LLM framework in low-resource linguistic contexts, thereby making natural language processing (NLP) benefits more globally accessible.
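As a concrete illustration of methodology (i), the sketch below shows what language-adaptive continued pretraining might look like with the Hugging Face Transformers and Datasets libraries. This is a minimal sketch under stated assumptions, not the paper's actual pipeline: the base checkpoint, corpus file name, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch of methodology (i): adapting an English-pretrained LLM to
# Turkish via continued (language-adaptive) pretraining with a causal LM
# objective. BASE_MODEL and "turkish_corpus.txt" are hypothetical placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # assumed English-pretrained base

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # base tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Raw Turkish text for continued pretraining (one document per line).
raw_tr = load_dataset("text", data_files={"train": "turkish_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

lm_data = raw_tr.map(tokenize, batched=True, remove_columns=["text"])
# mlm=False gives next-token (causal LM) labels rather than masked LM.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ckpt-tr-adapted",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,  # a low LR is one common way to limit forgetting
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=lm_data,
    data_collator=collator,
)
trainer.train()
```

Methodology (ii) would instead initialize the model from a fresh config rather than a pretrained checkpoint, and the supervised fine-tuning stage the abstract describes repeats the same training loop on instruction-formatted Turkish examples.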
URL
https://arxiv.org/abs/2405.04685