Abstract
Objective: Clinical knowledge-enriched transformer models (e.g., ClinicalBERT) achieve state-of-the-art results on clinical NLP (natural language processing) tasks. A core limitation of these transformer models is the substantial memory consumption of their full self-attention mechanism, which leads to performance degradation on long clinical texts. To overcome this, we propose to leverage long-sequence transformer models (e.g., Longformer and BigBird), which extend the maximum input sequence length from 512 to 4,096 tokens, to enhance the ability to model long-term dependencies in long clinical texts. Materials and Methods: Inspired by the success of long-sequence transformer models and the fact that clinical notes are mostly long, we introduce two domain-enriched language models, Clinical-Longformer and Clinical-BigBird, which are pre-trained on a large-scale clinical corpus. We evaluate both language models on 10 baseline tasks, including named entity recognition, question answering, natural language inference, and document classification. Results: The results demonstrate that Clinical-Longformer and Clinical-BigBird consistently and significantly outperform ClinicalBERT and other short-sequence transformers on all 10 downstream tasks and achieve new state-of-the-art results. Discussion: Our pre-trained language models provide the bedrock for clinical NLP with long texts. We have made our source code available at this https URL, and the pre-trained models available for public download at this https URL. Conclusion: This study demonstrates that clinical knowledge-enriched long-sequence transformers are able to learn long-term dependencies in long clinical texts. Our methods can also inspire the development of other domain-enriched long-sequence transformers.
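The memory argument above can be made concrete with a back-of-the-envelope sketch (not from the paper): full self-attention materializes an n × n score matrix per head, whereas a Longformer-style sliding-window pattern keeps roughly n × w entries for window size w, so at the extended 4,096-token limit the dense matrix is several times larger than the sparse one. The window size of 512 below is an illustrative assumption, not a value quoted from the paper.

```python
def full_attention_scores(seq_len: int) -> int:
    """Number of entries in the dense self-attention score matrix: O(n^2)."""
    return seq_len * seq_len

def sliding_window_scores(seq_len: int, window: int = 512) -> int:
    """Approximate entries with a local sliding-window pattern: O(n * w).

    The window of 512 is an assumed illustrative value.
    """
    return seq_len * window

# At a 512-token limit the two patterns coincide in size; at 4,096 tokens
# the dense matrix holds 8x as many scores as the windowed one.
dense = full_attention_scores(4096)   # 4096 * 4096 = 16,777,216 entries
sparse = sliding_window_scores(4096)  # 4096 * 512  =  2,097,152 entries
print(dense // sparse)  # 8
```

This ratio grows linearly with sequence length, which is why sparse-attention models such as Longformer and BigBird remain tractable on long clinical notes where dense attention does not.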
URL
https://arxiv.org/abs/2301.11847