Paper Reading AI Learner

MMRAG: Multi-Mode Retrieval-Augmented Generation with Large Language Models for Biomedical In-Context Learning

2025-02-21 21:36:48
Zaifu Zhan, Jun Wang, Shuang Zhou, Jiawen Deng, Rui Zhang

Abstract

Objective: To optimize in-context learning in biomedical natural language processing by improving example selection. Methods: We introduce a novel multi-mode retrieval-augmented generation (MMRAG) framework, which integrates four retrieval strategies: (1) Random mode, selecting examples arbitrarily; (2) Top mode, retrieving the most relevant examples based on similarity; (3) Diversity mode, ensuring variation among selected examples; and (4) Class mode, selecting category-representative examples. This study evaluates MMRAG on three core biomedical NLP tasks: Named Entity Recognition (NER), Relation Extraction (RE), and Text Classification (TC). The datasets used include BC2GM for gene and protein mention recognition (NER), DDI for drug-drug interaction extraction (RE), GIT for general biomedical information extraction (RE), and HealthAdvice for health-related text classification (TC). The framework is tested with two large language models (Llama2-7B, Llama3-8B) and three retrievers (Contriever, MedCPT, BGE-Large) to assess performance across different retrieval strategies. Results: The results from Random mode indicate that providing more examples in the prompt improves the model's generation performance. Meanwhile, Top mode and Diversity mode significantly outperform Random mode on the RE (DDI) task, achieving an F1 score of 0.9669, a 26.4% improvement over Random mode. Among the three retrievers tested, Contriever outperformed the other two in more of the experimental settings. Additionally, Llama 2 and Llama 3 demonstrated varying capabilities across tasks, with Llama 3 showing a clear advantage on NER. Conclusion: MMRAG effectively enhances biomedical in-context learning by refining example selection, mitigating data scarcity issues, and demonstrating strong adaptability for NLP-driven healthcare applications.
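The abstract describes four strategies for picking in-context examples but not their implementation. As a rough illustration only, the selection logic of each mode can be sketched over precomputed sentence embeddings; the function below is a hypothetical reconstruction (the `select_examples` name, the MMR-style diversity heuristic, and the 0.5 redundancy weight are assumptions, not details from the paper), assuming embeddings are L2-normalized as produced by retrievers such as Contriever, MedCPT, or BGE-Large.

```python
import numpy as np

def select_examples(query_emb, pool_embs, k, mode="top", labels=None, rng=None):
    """Sketch of the four MMRAG example-selection modes (hypothetical).

    query_emb: (d,) embedding of the query; pool_embs: (n, d) candidate
    embeddings. Both assumed L2-normalized, so dot product = cosine similarity.
    Returns indices of the k selected in-context examples.
    """
    rng = rng or np.random.default_rng(0)
    sims = pool_embs @ query_emb  # cosine similarity to the query

    if mode == "random":  # (1) arbitrary selection
        return list(rng.choice(len(pool_embs), size=k, replace=False))
    if mode == "top":  # (2) most similar examples first
        return list(np.argsort(-sims)[:k])
    if mode == "diversity":  # (3) greedy relevance-minus-redundancy selection
        chosen = [int(np.argmax(sims))]
        while len(chosen) < k:
            redundancy = pool_embs @ pool_embs[chosen].T  # vs. already chosen
            score = sims - 0.5 * redundancy.max(axis=1)   # assumed trade-off
            score[chosen] = -np.inf
            chosen.append(int(np.argmax(score)))
        return chosen
    if mode == "class":  # (4) most relevant representative of each category
        chosen = []
        for c in dict.fromkeys(labels):  # iterate labels in first-seen order
            idx = [i for i, lab in enumerate(labels) if lab == c]
            chosen.append(max(idx, key=lambda i: sims[i]))
        return chosen[:k]
    raise ValueError(f"unknown mode: {mode}")
```

In this sketch, Diversity mode trades relevance against redundancy with the already-chosen set, while Class mode guarantees each category is represented, which matches the abstract's description of variation and category-representative selection.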


URL

https://arxiv.org/abs/2502.15954

PDF

https://arxiv.org/pdf/2502.15954.pdf

