Abstract
Medical report generation from imaging data remains a challenging task in clinical practice. While large language models (LLMs) show great promise in addressing this challenge, their effective integration with medical imaging data remains underexplored. In this paper, we present MRG-LLM, a novel multimodal large language model (MLLM) that combines a frozen LLM with a learnable visual encoder and introduces a dynamic prompt customization mechanism. Our key innovation lies in generating instance-specific prompts tailored to individual medical images through conditional affine transformations derived from visual features. We propose two implementations: prompt-wise and promptbook-wise customization, enabling precise and targeted report generation. Extensive experiments on the IU X-ray and MIMIC-CXR datasets demonstrate that MRG-LLM achieves state-of-the-art performance in medical report generation. Our code will be made publicly available.
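To make the dynamic prompt customization idea more concrete, the minimal sketch below conditions a bank of learnable prompt embeddings on pooled visual features via a predicted affine (scale and shift) transformation, with a switch between per-prompt and whole-promptbook granularity. All module names, dimensions, the mean-pooling step, and the exact granularity semantics are assumptions for illustration only, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PromptCustomizer(nn.Module):
    """Illustrative sketch (not the authors' code): affine-transform a shared
    bank of learnable prompts using scale/shift predicted from image features."""

    def __init__(self, num_prompts: int = 8, dim: int = 768, promptbook_wise: bool = False):
        super().__init__()
        # Base learnable prompts shared across all images.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.promptbook_wise = promptbook_wise
        # Prompt-wise: one (scale, shift) pair per prompt token;
        # promptbook-wise: a single pair shared by the whole promptbook.
        out_dim = dim if promptbook_wise else num_prompts * dim
        self.to_scale = nn.Linear(dim, out_dim)
        self.to_shift = nn.Linear(dim, out_dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, num_patches, dim) from a visual encoder.
        pooled = visual_feats.mean(dim=1)                        # (batch, dim)
        scale = self.to_scale(pooled)
        shift = self.to_shift(pooled)
        if self.promptbook_wise:
            scale = scale.unsqueeze(1)                           # (batch, 1, dim)
            shift = shift.unsqueeze(1)
        else:
            n, d = self.prompts.shape
            scale = scale.view(-1, n, d)                         # (batch, n, dim)
            shift = shift.view(-1, n, d)
        # Instance-specific prompts: conditional affine transform of shared prompts.
        # These would typically be prepended to the frozen LLM's input embeddings.
        return (1 + scale) * self.prompts.unsqueeze(0) + shift   # (batch, n, dim)


# Usage with dummy ViT-style features (hypothetical shapes).
feats = torch.randn(2, 196, 768)
customized = PromptCustomizer(promptbook_wise=False)(feats)
print(customized.shape)  # torch.Size([2, 8, 768])
```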
URL
https://arxiv.org/abs/2506.15477