Abstract
Chinese Spell Checking (CSC) is a widely used technology that plays a vital role in speech-to-text (STT) and optical character recognition (OCR). Most existing CSC approaches rely on the BERT architecture and achieve excellent performance. However, limited by the scale of the foundation model, BERT-based methods do not work well in few-shot scenarios, which restricts their practical applicability. In this paper, we explore an in-context learning method named RS-LLM (Rich Semantic based LLMs) that introduces large language models (LLMs) as the foundation model. In addition, we study the impact of incorporating various kinds of Chinese rich semantic information into our framework. We find that by introducing a small number of specific Chinese rich semantic structures, LLMs achieve better performance than the BERT-based model on the few-shot CSC task. Furthermore, we conduct experiments on multiple datasets, and the results verify the superiority of our proposed framework.
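The abstract does not specify the prompt format, so the following is a minimal sketch of what an in-context learning setup with rich semantic annotations might look like. Pinyin (obtained via the pypinyin package) is used here as one illustrative semantic signal; the actual rich semantic structures and prompt template of RS-LLM are assumptions, and `call_llm` would be any chat-completion API.

```python
# Sketch of few-shot CSC prompting with a rich semantic annotation.
# NOTE: this is an illustrative assumption, not the paper's exact method.
from pypinyin import lazy_pinyin


def annotate(sentence: str) -> str:
    """Pair each Chinese character with its pinyin as a semantic hint.

    Assumes the sentence contains only hanzi, so characters and
    pinyin syllables align one-to-one.
    """
    return " ".join(f"{ch}({py})" for ch, py in zip(sentence, lazy_pinyin(sentence)))


def build_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot CSC prompt from (erroneous, corrected) pairs."""
    lines = ["Correct the spelling errors in the Chinese sentence."]
    for wrong, right in examples:
        lines.append(f"Input: {annotate(wrong)}\nOutput: {right}")
    lines.append(f"Input: {annotate(query)}\nOutput:")
    return "\n\n".join(lines)


# Usage with a classic 在/再 confusion pair:
few_shot = [("我在一次感谢你", "我再一次感谢你")]
prompt = build_prompt(few_shot, "他明天在来一趟")
# prompt would then be sent to the LLM, e.g. call_llm(prompt)
```

Annotating each character with its pronunciation gives the LLM an explicit signal for phonologically confusable characters, which is the dominant error type in CSC.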
URL
https://arxiv.org/abs/2403.08492