Paper Reading AI Learner

A Comparative Study of Pretrained Language Models for Long Clinical Text

2023-01-27 16:50:29
Yikuan Li, Ramsey M. Wehbe, Faraz S. Ahmad, Hanyin Wang, Yuan Luo

Abstract

Objective: Clinical knowledge enriched transformer models (e.g., ClinicalBERT) have achieved state-of-the-art results on clinical NLP (natural language processing) tasks. A core limitation of these transformer models is the substantial memory consumption of their full self-attention mechanism, which leads to performance degradation on long clinical texts. To overcome this, we propose to leverage long-sequence transformer models (e.g., Longformer and BigBird), which extend the maximum input sequence length from 512 to 4096 tokens, to enhance the ability to model long-term dependencies in long clinical texts. Materials and Methods: Inspired by the success of long-sequence transformer models and the fact that clinical notes are mostly long, we introduce two domain-enriched language models, Clinical-Longformer and Clinical-BigBird, which are pre-trained on a large-scale clinical corpus. We evaluate both language models on 10 baseline tasks, including named entity recognition, question answering, natural language inference, and document classification. Results: Clinical-Longformer and Clinical-BigBird consistently and significantly outperform ClinicalBERT and other short-sequence transformers on all 10 downstream tasks, achieving new state-of-the-art results. Discussion: Our pre-trained language models provide the bedrock for clinical NLP on long texts. We have made our source code available at this https URL, and the pre-trained models available for public download at this https URL. Conclusion: This study demonstrates that clinical knowledge enriched long-sequence transformers are able to learn long-term dependencies in long clinical text. Our methods can also inspire the development of other domain-enriched long-sequence transformers.
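In practice, such checkpoints can be used as drop-in replacements for short-sequence clinical encoders. Below is a minimal sketch of encoding a long note with Clinical-Longformer via the Hugging Face transformers library; the model identifiers (yikuan8/Clinical-Longformer, yikuan8/Clinical-BigBird) and the sample note are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch: encoding a long clinical note with a long-sequence
# clinical transformer. Assumes the `transformers` library is installed
# and that the checkpoints are hosted on the Hugging Face Hub under the
# identifiers below (an assumption, not stated in the abstract).
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "yikuan8/Clinical-Longformer"  # or "yikuan8/Clinical-BigBird"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# An illustrative long clinical note. BERT-style encoders cap input at
# 512 tokens; Longformer/BigBird variants accept up to 4096.
note = "Patient admitted with acute decompensated heart failure. " * 100

inputs = tokenizer(
    note,
    truncation=True,
    max_length=4096,  # extended context window vs. 512 for ClinicalBERT
    return_tensors="pt",
)
outputs = model(**inputs)

# Token-level representations that can feed downstream heads for named
# entity recognition, question answering, natural language inference,
# or document classification.
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```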

URL

https://arxiv.org/abs/2301.11847

PDF

https://arxiv.org/pdf/2301.11847.pdf

