Abstract
The biomedical literature is expanding rapidly, and classification of biomedical texts is an essential part of biomedical research, especially in the field of biology. This work proposes a fine-tuned DistilBERT, a methodology-specific, pre-trained classification language model for mining biomedical texts. DistilBERT has proven its effectiveness in language understanding while reducing the size of BERT by 40\% and running 60\% faster. The main objective of this project is to fine-tune the model and assess its performance against the non-fine-tuned baseline. We used DistilBERT as the base model and pre-trained it on a corpus of 32,000 abstracts and full-text articles; our results surpassed those of traditional literature classification methods based on RNNs or LSTMs. Our aim is to integrate this highly specialised model into different areas of research and industry.
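The abstract describes fine-tuning DistilBERT for biomedical text classification. A minimal sketch of that general setup using the Hugging Face transformers library is shown below; the corpus, label set, model checkpoint, output path, and hyperparameters are illustrative placeholders, not the authors' actual configuration.

```python
# Sketch: fine-tuning DistilBERT for biomedical abstract classification.
# All data, labels, and hyperparameters here are hypothetical examples.
import torch
from torch.utils.data import Dataset
from transformers import (
    DistilBertTokenizerFast,
    DistilBertForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical corpus: each entry is an abstract plus an integer class label.
train_texts = [
    "Protein kinase inhibition reduces tumor cell proliferation ...",
    "Gene expression profiling of cardiac tissue reveals ...",
]
train_labels = [0, 1]

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

class AbstractDataset(Dataset):
    """Wraps tokenized abstracts and labels for the Trainer API."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

train_dataset = AbstractDataset(train_texts, train_labels)

training_args = TrainingArguments(
    output_dir="distilbert-biomed",      # illustrative output path
    num_train_epochs=3,                  # placeholder hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

In practice the classification head's `num_labels` and the evaluation split would follow the paper's own label scheme; the sketch only illustrates the standard fine-tuning workflow for a DistilBERT sequence classifier.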
URL
https://arxiv.org/abs/2404.13779