Abstract
Developing effective biomedical retrieval models is important for excelling at knowledge-intensive biomedical tasks, but it remains challenging due to the lack of sufficient publicly annotated biomedical data and computational resources. We present BMRetriever, a series of dense retrievers for enhancing biomedical retrieval via unsupervised pre-training on large biomedical corpora, followed by instruction fine-tuning on a combination of labeled datasets and synthetic pairs. Experiments on 5 biomedical tasks across 11 datasets verify BMRetriever's efficacy for various biomedical applications. BMRetriever also exhibits strong parameter efficiency: the 410M variant outperforms baselines up to 11.7 times larger, and the 2B variant matches the performance of models with over 5B parameters. The training data and model checkpoints are released at \url{this https URL} to ensure transparency, reproducibility, and application to new domains.
URL
https://arxiv.org/abs/2404.18443