Abstract
Large language models, such as GPT-4 and Med-PaLM, have shown impressive performance on clinical tasks; however, they require access to substantial compute, are closed-source, and cannot be deployed on device. Mid-size models such as BioGPT-large, BioMedLM, LLaMA 2, and Mistral 7B avoid these drawbacks, but their capacity for clinical tasks has been understudied. To help assess their potential for clinical use and help researchers decide which model to use, we compare their performance on two clinical question-answering (QA) tasks: MedQA and consumer query answering. We find that Mistral 7B is the best-performing model, winning on all benchmarks and outperforming models trained specifically for the biomedical domain. While Mistral 7B's MedQA score of 63.0% approaches that of the original Med-PaLM, and it can often produce plausible responses to consumer health queries, room for improvement still exists. This study provides the first head-to-head assessment of open-source mid-sized models on clinical tasks.
Abstract (translated)
Large language models, such as GPT-4 and Med-PaLM, perform impressively on clinical tasks; however, they require access to compute resources, are closed-source, and cannot be deployed on device. Mid-size models such as BioGPT-large, BioMedLM, LLaMA 2, and Mistral 7B avoid these drawbacks, but their capabilities on clinical tasks remain understudied. To help assess their potential for clinical use and help researchers decide which model to use, we compare their performance on two clinical question-answering (QA) tasks: MedQA and consumer query answering. We find that Mistral 7B is the best-performing model, winning on all benchmarks and outperforming models trained specifically for the biomedical domain. While Mistral 7B's MedQA score of 63.0% approaches that of the original Med-PaLM, and it can often produce plausible responses to consumer health questions, room for improvement remains. This study is the first head-to-head comparison of open-source mid-size models on clinical tasks.
URL
https://arxiv.org/abs/2404.15894