Abstract
Radiology Report Generation (R2Gen) demonstrates how Multi-modal Large Language Models (MLLMs) can automate the creation of accurate and coherent radiological reports. Existing methods often hallucinate details in text-based reports that do not accurately reflect the image content. To mitigate this, we introduce a novel strategy, SERPENT-VLM (SElf Refining Radiology RePort GENeraTion using Vision Language Models), which improves the R2Gen task by integrating a self-refining mechanism into the MLLM framework. We employ a unique self-supervised loss that leverages the similarity between pooled image representations and the contextual representations of the generated radiological text, alongside the standard Causal Language Modeling objective, to refine image-text representations. This allows the model to scrutinize and align the generated text through dynamic interaction between a given image and the generated text, thereby reducing hallucination and continuously enhancing nuanced report generation. SERPENT-VLM outperforms existing baselines such as LLaVA-Med and BiomedGPT, achieving SoTA performance on the IU X-ray and Radiology Objects in COntext (ROCO) datasets, and also proves to be robust against noisy images. A qualitative case study emphasizes the significant advancements towards more sophisticated MLLM frameworks for R2Gen, opening paths for further research into self-supervised refinement in the medical imaging domain.
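The abstract describes combining a causal language modeling loss with a self-supervised term based on the similarity between pooled image representations and pooled text representations. The exact formulation is not given in the abstract; the sketch below is one plausible reading, using mean pooling and a (1 - cosine similarity) alignment penalty. All names (`serpent_loss`, `lam`) and the specific pooling and weighting choices are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

def mean_pool(reprs):
    # Pool a (sequence_length, dim) matrix of token/patch vectors
    # into a single (dim,) vector by averaging over the sequence.
    return reprs.mean(axis=0)

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def serpent_loss(image_patches, text_states, ce_loss, lam=0.5):
    """Illustrative combined objective (assumed form):
    causal-LM cross-entropy plus a self-supervised alignment
    term that is 0 when pooled image and text representations
    point in the same direction and grows as they diverge."""
    sim = cosine(mean_pool(image_patches), mean_pool(text_states))
    refine = 1.0 - sim  # alignment penalty in [0, 2]
    return ce_loss + lam * refine
```

Intuitively, the cross-entropy term keeps the report fluent while the alignment term penalizes generated text whose representation drifts away from the image content, which is the abstract's proposed route to reducing hallucination.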
URL
https://arxiv.org/abs/2404.17912