Abstract
Specialised pre-trained language models are increasingly common in NLP, since they can potentially outperform models trained on generic texts. BioBERT and BioClinicalBERT are two examples of such models that have shown promise in medical NLP tasks. Many of these models are over-parametrised and resource-intensive, but techniques such as Knowledge Distillation (KD) make it possible to create smaller versions that perform almost as well as their larger counterparts. In this work, we focus specifically on the development of compact language models for processing clinical texts (e.g. progress notes, discharge summaries). Using knowledge distillation and continual learning, we developed a number of efficient, lightweight clinical transformers with parameter counts ranging from 15 million to 65 million. These models performed comparably to larger models such as BioBERT and BioClinicalBERT, and significantly outperformed other compact models trained on general or biomedical data. Our extensive evaluation covered several standard datasets and a wide range of clinical text-mining tasks, including Natural Language Inference, Relation Extraction, Named Entity Recognition, and Sequence Classification. To our knowledge, this is the first comprehensive study specifically focused on creating efficient and compact transformers for clinical NLP tasks. To promote reproducibility, the models and code used in this study are available on our Huggingface profile at this https URL and our Github page at this https URL, respectively.
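The knowledge distillation mentioned above can be illustrated with a minimal sketch of the standard soft-target (Hinton-style) distillation objective: the student is trained to match the teacher's temperature-softened output distribution. This is a generic illustration under assumed temperature and logit shapes, not the paper's exact training setup; the function names and the NumPy implementation are our own for illustration.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Scaled by T^2, as is conventional, so soft-target gradients keep
    roughly the same magnitude as the hard-label cross-entropy term
    they are usually combined with.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty, which is what drives the compact student toward the larger teacher's behaviour.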
URL
https://arxiv.org/abs/2302.04725