Abstract
In acupuncture therapy, accurately locating acupoints is essential for treatment effectiveness. The advanced language understanding capabilities of large language models (LLMs) such as Generative Pre-trained Transformers (GPT) present a significant opportunity for extracting relations related to acupoint locations from textual knowledge sources. This study compares the performance of GPT with traditional deep learning models (Long Short-Term Memory (LSTM) and Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT)) in extracting acupoint-related location relations, and assesses the impact of pretraining and fine-tuning on GPT's performance. We used the World Health Organization Standard Acupuncture Point Locations in the Western Pacific Region (WHO Standard) as our corpus, which consists of descriptions of 361 acupoints. Five types of relations between acupoints ('direction_of', 'distance_of', 'part_of', 'near_acupoint', and 'located_near'; n = 3,174) were annotated. Five models were compared: BioBERT, LSTM, pre-trained GPT-3.5, fine-tuned GPT-3.5, and pre-trained GPT-4. Performance metrics included micro-average exact-match precision, recall, and F1 scores. Our results show that fine-tuned GPT-3.5 consistently outperformed the other models in F1 score across all relation types, achieving the highest overall micro-average F1 score of 0.92. This study underscores the effectiveness of LLMs like GPT in extracting relations related to acupoint locations, with implications for accurately modeling acupuncture knowledge and promoting standard implementation in acupuncture training and practice. The findings also contribute to advancing informatics applications in traditional and complementary medicine, showcasing the potential of LLMs in natural language processing.
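The micro-average exact-match evaluation described in the abstract can be sketched as below: predicted relation triples are pooled across all relation types and compared against gold annotations, with a prediction counting only if it matches exactly. This is a minimal illustrative sketch; the example triples are hypothetical and not taken from the WHO Standard corpus.

```python
def micro_prf(gold, pred):
    """Micro-averaged exact-match precision, recall, and F1.

    Relations are pooled across all types; each relation is a
    (head, relation_type, tail) triple, and a predicted triple is
    correct only if it matches a gold triple exactly.
    """
    tp = len(gold & pred)   # exact matches
    fp = len(pred - gold)   # spurious predictions
    fn = len(gold - pred)   # missed gold relations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example triples (not from the annotated corpus):
gold = {("ST36", "distance_of", "3 B-cun"),
        ("ST36", "direction_of", "inferior"),
        ("ST36", "near_acupoint", "ST35")}
pred = {("ST36", "distance_of", "3 B-cun"),
        ("ST36", "near_acupoint", "ST35"),
        ("ST36", "part_of", "lower leg")}

p, r, f = micro_prf(gold, pred)  # 2 matches, 1 spurious, 1 missed
```

Because the average is micro rather than macro, frequent relation types such as 'direction_of' weigh more heavily in the overall score than rare ones.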
URL
https://arxiv.org/abs/2404.05415