Paper Reading AI Learner

AutoRE: Document-Level Relation Extraction with Large Language Models

2024-03-21 23:48:21
Xue Lilong, Zhang Dan, Dong Yuxiao, Tang Jie

Abstract

Large Language Models (LLMs) have demonstrated exceptional abilities in comprehending and generating text, motivating numerous researchers to utilize them for Information Extraction (IE) purposes, including Relation Extraction (RE). Nonetheless, most existing methods are predominantly designed for Sentence-level Relation Extraction (SentRE) tasks, which typically encompass a restricted set of relations and triplet facts within a single sentence. Furthermore, certain approaches treat relations as candidate choices integrated into prompt templates, leading to inefficient processing and suboptimal performance on Document-Level Relation Extraction (DocRE) tasks, which entail handling multiple relations and triplet facts distributed across a given document and thus pose distinct challenges. To overcome these limitations, we introduce AutoRE, an end-to-end DocRE model that adopts a novel RE paradigm named RHF (Relation-Head-Facts). Unlike existing approaches, AutoRE does not rely on the assumption of known relation options, making it more reflective of real-world scenarios. Additionally, we have developed an easily extensible RE framework using a Parameter-Efficient Fine-Tuning (PEFT) algorithm (QLoRA). Our experiments on the RE-DocRED dataset demonstrate AutoRE's state-of-the-art performance, surpassing TAG by 10.03% and 9.03% on the dev and test sets, respectively.
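The RHF (Relation-Head-Facts) paradigm named in the abstract can be read as a staged decomposition: first identify the relations expressed in a document, then the head entities for each relation, then the complete triplet facts. The sketch below is an illustrative reconstruction of that three-stage flow, not the paper's actual implementation; the `llm` callable and all prompt wordings are hypothetical stand-ins.

```python
def extract_rhf(document, llm):
    """Sketch of an RHF-style pipeline: relations -> heads -> facts.

    `llm` is a hypothetical callable mapping a prompt string to a
    newline-separated answer string; in AutoRE this role would be
    played by a QLoRA-fine-tuned language model.
    """
    triplets = []
    # Stage 1: which relations does the document express? No candidate
    # relation list is supplied, mirroring the abstract's claim that
    # AutoRE does not assume known relation options.
    relations = llm(f"List the relations expressed in:\n{document}").splitlines()
    for rel in relations:
        # Stage 2: head entities participating in this relation.
        heads = llm(
            f"List head entities for relation '{rel}' in:\n{document}"
        ).splitlines()
        for head in heads:
            # Stage 3: complete the (head, relation, ?) facts.
            tails = llm(
                f"List tails t such that ({head}, {rel}, t) holds in:\n{document}"
            ).splitlines()
            triplets.extend((head, rel, tail) for tail in tails)
    return triplets
```

Decomposing extraction this way keeps each prompt focused on one decision, which is one plausible reading of why RHF would outperform templates that enumerate all candidate relations at once.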

URL

https://arxiv.org/abs/2403.14888

PDF

https://arxiv.org/pdf/2403.14888.pdf

