Abstract
In this report, we present ChuXin, an entirely open-source language model with 1.6 billion parameters. Unlike the majority of prior work, which open-sources only the model weights and architecture, we have released everything needed to train the model, including the training data, the training process, and the evaluation code. Our goal is to empower and strengthen the open research community, fostering transparency and enabling a new wave of innovation in the field of language modeling. Furthermore, we extend the context length to 1M tokens through lightweight continual pretraining and demonstrate strong needle-in-a-haystack retrieval performance. The weights for both models are available on Hugging Face to download and use.
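The needle-in-a-haystack evaluation mentioned above can be illustrated with a minimal sketch. This is not the paper's released evaluation code; it is a generic reconstruction of the common test setup, where a short "needle" fact is inserted at a chosen depth inside long filler text and the model is asked to recall it. The function name and the word-based token approximation are assumptions for illustration.

```python
def build_haystack_prompt(needle: str, filler: str, depth: float, target_tokens: int) -> str:
    """Construct a long-context retrieval test prompt.

    A 'needle' sentence is inserted at a relative depth (0.0 = start,
    1.0 = end) inside repeated filler text, followed by a recall question.
    Tokens are approximated by whitespace-separated words for simplicity.
    """
    filler_words = filler.split()
    # Repeat the filler until we have at least target_tokens words, then trim.
    reps = target_tokens // max(len(filler_words), 1) + 1
    words = (filler_words * reps)[:target_tokens]
    # Insert the needle at the requested relative position.
    insert_at = int(depth * len(words))
    words.insert(insert_at, needle)
    context = " ".join(words)
    question = "What is the magic number mentioned in the text above?"
    return f"{context}\n\n{question}"

# Scaled-down usage example (a real 1M-token test would use a tokenizer
# and much longer filler; these values are placeholders).
prompt = build_haystack_prompt(
    needle="The magic number is 42.",
    filler="The sky was clear and the city hummed quietly below.",
    depth=0.5,
    target_tokens=2000,
)
```

Sweeping `depth` and `target_tokens` over a grid, and checking whether the model's answer contains the needle, produces the heatmap typically reported for this benchmark.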
URL
https://arxiv.org/abs/2405.04828