Abstract
This study introduces an approach to Estonian text simplification using two model architectures: a neural machine translation model and a fine-tuned large language model (LLaMA). Given the scarcity of Estonian-language resources, we developed a new corpus, the Estonian Simplification Dataset, which combines translated data with GPT-4.0-generated simplifications. We benchmarked OpenNMT, a neural machine translation system that frames text simplification as a translation task, and fine-tuned the LLaMA model on our dataset to tailor it specifically to Estonian simplification. Manual evaluation on the test set shows that the LLaMA model consistently outperforms OpenNMT in readability, grammaticality, and meaning preservation. These findings underscore the potential of large language models for low-resource languages and provide a basis for further research in Estonian text simplification.
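For concreteness, the sketch below shows one plausible way to fine-tune a LLaMA checkpoint on complex-simple sentence pairs with the Hugging Face Trainer. The abstract does not specify the authors' toolchain, so the base checkpoint (meta-llama/Llama-2-7b-hf), the Estonian prompt template, the file name pairs.jsonl, and all hyperparameters are illustrative assumptions, not the paper's actual configuration.

# Hypothetical sketch: causal-LM fine-tuning of a LLaMA model on
# complex -> simple Estonian sentence pairs. Checkpoint name, prompt
# wording, file path, and hyperparameters are assumptions for
# illustration; the paper does not publish its training setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL)

def to_example(pair):
    # One training string per pair: an (assumed) Estonian instruction
    # prompt followed by the simplified target sentence.
    text = ("Lihtsusta järgmine lause.\n"
            f"Lause: {pair['complex']}\n"
            f"Lihtsustus: {pair['simple']}" + tokenizer.eos_token)
    return tokenizer(text, truncation=True, max_length=512)

# pairs.jsonl stands in for the Estonian Simplification Dataset,
# one {"complex": ..., "simple": ...} object per line.
train = Dataset.from_json("pairs.jsonl").map(
    to_example, remove_columns=["complex", "simple"])

# mlm=False makes the collator copy input_ids into labels,
# i.e. standard next-token prediction over the full sequence.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(output_dir="llama-et-simplify",
                         per_device_train_batch_size=2,
                         num_train_epochs=3,
                         learning_rate=2e-5,
                         logging_steps=50)
Trainer(model=model, args=args, train_dataset=train,
        data_collator=collator).train()

At inference time the same prompt prefix up to "Lihtsustus:" would be fed to model.generate, with decoding stopped at the EOS token so that only the simplified sentence is produced.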
URL
https://arxiv.org/abs/2501.15624