Paper Reading AI Learner

Large Language Models for Simultaneous Named Entity Extraction and Spelling Correction

2024-03-01 13:36:04
Edward Whittaker, Ikuo Kitagishi

Abstract

Language Models (LMs) such as BERT have been shown to perform well on the task of identifying Named Entities (NEs) in text. A BERT LM is typically used as a classifier to classify individual tokens in the input text, or spans of tokens, as belonging to one of a set of possible NE categories. In this paper, we hypothesise that decoder-only Large Language Models (LLMs) can also be used generatively both to extract the NE and, potentially, to recover the correct surface form of the NE, automatically correcting any spelling errors that were present in the input text. We fine-tune two BERT LMs as baselines, as well as eight open-source LLMs, on the task of producing NEs from text obtained by applying Optical Character Recognition (OCR) to images of Japanese shop receipts; in this work, we do not attempt to find or evaluate the location of NEs in the text. We show that the best fine-tuned LLM performs as well as, or slightly better than, the best fine-tuned BERT LM, although the differences are not significant. However, the best LLM is also shown to correct OCR errors in some cases, as initially hypothesised.
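
To make the BERT baseline concrete, the sketch below shows the standard token-classification formulation the abstract refers to, using the HuggingFace transformers API. The multilingual checkpoint, the NE tag set, and the example receipt line are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch of a BERT token-classification NER baseline, assuming a
# HuggingFace-style setup. Labels and checkpoint are hypothetical stand-ins.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Hypothetical NE tag set for receipt fields (the paper's label inventory
# is not given here).
labels = ["O", "B-SHOP", "I-SHOP", "B-ITEM", "I-ITEM"]

# bert-base-multilingual-cased is a stand-in; the paper fine-tunes BERT LMs
# on Japanese receipt text.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(labels)
)

text = "ファミリーマート 渋谷店"  # one OCR line from a receipt (illustrative)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, labels[label_id])  # per-token NE tag (random until fine-tuned)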
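A second sketch illustrates the generative formulation the paper hypothesises: a fine-tuned decoder-only LLM maps noisy OCR text directly to the named-entity string, which allows it to emit a corrected surface form. The prompt template, the placeholder checkpoint, and the example OCR confusions are assumptions for illustration only.

# Minimal sketch of generative NE extraction with a decoder-only LLM.
# "gpt2" is only a placeholder; the paper fine-tunes eight open-source LLMs
# on pairs of OCR text and target NE strings.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

ocr_text = "フアミリ一マ一ト 渋谷店"  # plausible OCR confusions: ア for ァ, 一 for ー
prompt = f"OCR text: {ocr_text}\nShop name:"  # hypothetical template
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16,
                        pad_token_id=tokenizer.eos_token_id)
generated = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
# After fine-tuning on (OCR text, NE) pairs, the model would ideally emit
# "ファミリーマート 渋谷店", i.e. the NE with the spelling errors corrected.

Because the model generates the entity rather than tagging input tokens, it is not constrained to copy the noisy surface form, which is what makes the simultaneous spelling correction possible.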

URL

https://arxiv.org/abs/2403.00528

PDF

https://arxiv.org/pdf/2403.00528.pdf

