Abstract
Recently, there have been significant advances in large language models (LLMs), particularly for English. These advances have enabled LLMs to understand and execute complex instructions with unprecedented accuracy and fluency. However, a noticeable gap remains in the development of Chinese instruction tuning. The unique linguistic features and cultural depth of the Chinese language pose challenges for instruction-tuning tasks. Existing datasets are either derived from English-centric LLMs or are ill-suited to the interaction patterns of real-world Chinese users. To bridge this gap, we introduce COIG-CQIA, a high-quality Chinese instruction-tuning dataset. Our aim is to build a diverse, wide-ranging instruction-tuning dataset that better aligns model behavior with human interactions. To this end, we collect a high-quality human-written corpus from various sources on the Chinese Internet, including Q&A communities, wikis, examinations, and existing NLP datasets. This corpus is rigorously filtered and carefully processed to form the COIG-CQIA dataset. Furthermore, we train models of various scales on different subsets of CQIA, followed by in-depth evaluation and analysis. The findings from our experiments offer valuable insights for selecting and developing Chinese instruction-tuning datasets. We also find that models trained on the CQIA-Subset achieve competitive results in human assessment as well as on knowledge and security benchmarks. Data are available at this https URL
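To make the pipeline described above concrete, the following is a minimal, self-contained Python sketch (not the authors' actual pipeline) of two steps the abstract mentions: rule-based quality filtering of human-written records, and formatting the surviving records into supervised fine-tuning prompts. The field names ("instruction", "input", "output") follow a common instruction-tuning convention and are an assumption here, not the confirmed CQIA schema.

    def keep(example: dict, min_len: int = 10, max_len: int = 4096) -> bool:
        """Toy quality filter: drop records with empty or extreme-length answers."""
        answer = example.get("output", "").strip()
        return min_len <= len(answer) <= max_len

    def build_prompt(example: dict) -> str:
        """Render one record as a single instruction/response training string."""
        parts = [f"指令：{example['instruction']}"]
        if example.get("input"):  # optional extra context, often empty
            parts.append(f"输入：{example['input']}")
        parts.append(f"回答：{example['output']}")
        return "\n".join(parts)

    # Hypothetical record in the assumed schema, for illustration only.
    record = {
        "instruction": "用一句话解释什么是指令微调。",
        "input": "",
        "output": "指令微调是在人工编写的指令-回答对上继续训练语言模型，使其更好地遵循用户指令。",
    }
    if keep(record):
        print(build_prompt(record))

In practice, the filtering stage for a dataset like this would combine many such rules with manual review; the single length check above is only a placeholder for that process.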
URL
https://arxiv.org/abs/2403.18058