Improving Relation Extraction by Pre-trained Language Representations

2019-06-07 13:31:09
Christoph Alt, Marc Hübner, Leonhard Hennig

Abstract

Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.
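The fine-tuning recipe outlined in the abstract, a pre-trained Transformer language model with a linear relation-classification head on top, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' released code: it assumes the Hugging Face transformers library, uses "openai-gpt" as the pre-trained model, marks the argument pair by simply appending the two entity mentions after a plain-text delimiter, and assumes 42 output classes (TACRED's 41 relation types plus no_relation).

```python
# Minimal sketch under the assumptions stated above; not the authors' TRE code.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class RelationClassifier(nn.Module):
    """Pre-trained Transformer LM + linear relation-classification head."""

    def __init__(self, lm_name="openai-gpt", num_relations=42):
        super().__init__()
        # The pre-trained LM supplies the implicit linguistic features.
        self.lm = AutoModel.from_pretrained(lm_name)
        self.head = nn.Linear(self.lm.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        states = self.lm(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        # GPT-style classification: summarize the sequence with the hidden
        # state of its last non-padding token.
        last = attention_mask.sum(dim=1) - 1
        summary = states[torch.arange(states.size(0)), last]
        return self.head(summary)


tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = RelationClassifier()

# Hypothetical input format: sentence followed by the two candidate
# entity mentions, separated by a plain-text delimiter.
batch = tokenizer(
    ["Christoph Alt works at DFKI . | Christoph Alt | DFKI"],
    return_tensors="pt",
)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 42]): one score per relation class
```

The actual model differs in details such as the delimiter/entity-marking scheme and the training objective; the code released by the authors is authoritative.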

URL

https://arxiv.org/abs/1906.03088

PDF

https://arxiv.org/pdf/1906.03088.pdf

