Abstract
Language Models (LMs) such as BERT have been shown to perform well on the task of identifying Named Entities (NEs) in text. A BERT LM is typically used as a classifier that assigns individual tokens, or spans of tokens, in the input text to one of a set of possible NE categories. In this paper, we hypothesise that decoder-only Large Language Models (LLMs) can instead be used generatively, both to extract the NE and potentially to recover its correct surface form, with any spelling errors present in the input text corrected automatically. We fine-tune two BERT LMs as baselines, as well as eight open-source LLMs, on the task of producing NEs from text obtained by applying Optical Character Recognition (OCR) to images of Japanese shop receipts; in this work, we do not attempt to find or evaluate the location of NEs in the text. We show that the best fine-tuned LLM performs as well as, or slightly better than, the best fine-tuned BERT LM, although the differences are not statistically significant. However, the best LLM is also shown to correct OCR errors in some cases, as initially hypothesised.
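The token-classification setup described above is commonly realised with BIO tagging, where per-token labels are decoded into labelled entity spans. A minimal sketch of that decoding step follows; the tag names (`SHOP`, `DATE`) and the example receipt tokens are illustrative assumptions, not the paper's actual label set.

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (category, surface_form) entities.

    Assumes one tag per token: "B-<CAT>" opens an entity, "I-<CAT>"
    continues the current one, and "O" closes it. Tag names here are
    hypothetical, not the paper's label set.
    """
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current = (current[0], current[1] + [tok])
        else:  # "O" or an inconsistent tag ends any open entity
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    # Japanese text is written without spaces, so join tokens directly.
    return [(cat, "".join(toks)) for cat, toks in spans]

tokens = ["株式", "会社", "山田", "2024", "-", "03", "-", "01"]
tags = ["B-SHOP", "I-SHOP", "I-SHOP",
        "B-DATE", "I-DATE", "I-DATE", "I-DATE", "I-DATE"]
print(bio_to_spans(tokens, tags))
# → [('SHOP', '株式会社山田'), ('DATE', '2024-03-01')]
```

Note that this span decoding is exactly the step the generative approach sidesteps: a decoder-only LLM emits the entity strings directly, which is what allows it to also rewrite (and thereby correct) the surface form.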
URL
https://arxiv.org/abs/2403.00528