Abstract
Large language models (LLMs) are now widely used in various fields, including finance. However, no Japanese financial-specific LLM has been proposed yet. Hence, this study aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused datasets for continual pre-training. As the base model, we employed a Japanese LLM that achieved state-of-the-art performance on Japanese financial benchmarks among models with about 10 billion parameters. After continual pre-training on these datasets, the tuned model outperformed the original model on the Japanese financial benchmarks. Moreover, a comparison of the outputs reveals that the tuned model's answers tend to be better than the original model's in both quality and length. These findings indicate that domain-specific continual pre-training is also effective for LLMs. The tuned model is publicly available on Hugging Face.
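The abstract does not give training details, so the following is only a minimal sketch of what domain-specific continual pre-training with Hugging Face Transformers typically looks like. The model identifier, corpus file, and hyperparameters are illustrative assumptions, not the authors' actual settings.

```python
# Minimal sketch of continual pre-training on a domain corpus with Hugging Face
# Transformers. Model ID, corpus path, and hyperparameters are placeholders,
# not the settings used in the paper.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "example-org/japanese-10b-base"  # hypothetical base-model ID
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

# Japanese financial text corpus, one document per line (path is illustrative).
corpus = load_dataset("text", data_files={"train": "ja_finance_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard causal-LM objective: the collator derives labels from the input IDs.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="continual-pretrain-ja-finance",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,  # typically lower than from-scratch pre-training
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

In this kind of setup, the main departure from ordinary fine-tuning is that the objective stays the same next-token prediction as pre-training, only the data distribution shifts to the target domain.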
URL
https://arxiv.org/abs/2404.10555