Abstract
The successes of foundation models such as ChatGPT and AlphaFold have spurred significant interest in building similar models for electronic medical records (EMRs) to improve patient care and hospital operations. However, recent hype has obscured critical gaps in our understanding of these models' capabilities. We review over 80 foundation models trained on non-imaging EMR data (i.e., clinical text and/or structured data) and create a taxonomy delineating their architectures, training data, and potential use cases. We find that most models are trained on small, narrowly scoped clinical datasets (e.g., MIMIC-III) or broad, public biomedical corpora (e.g., PubMed) and are evaluated on tasks that do not provide meaningful insight into their usefulness to health systems. In light of these findings, we propose an improved evaluation framework for measuring the benefits of clinical foundation models that is more closely grounded in metrics that matter in healthcare.
URL
https://arxiv.org/abs/2303.12961