Abstract
When Large Vision Language Models (LVLMs) are applied to multimodal medical generative tasks, they suffer from significant hallucination issues. This severely impairs their generative accuracy and makes it challenging to deploy LVLMs in real-world medical scenarios to assist doctors in diagnosis. Enhancing the training data for downstream medical generative tasks is an effective way to address model hallucination. However, the limited availability of training data in the medical field, together with privacy concerns, greatly hinders the model's accuracy and generalization capabilities. In this paper, we propose MedThink, a method that mimics human cognitive processes to construct fine-grained instruction pairs and transfers the concept of chain-of-thought (CoT) from inference scenarios to training scenarios. Our experiments on various LVLMs demonstrate that this data construction method, tailored for the medical domain, significantly improves model performance on medical image report generation tasks and substantially mitigates hallucinations. All resources of this work will be released soon.
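The abstract does not specify the data-construction pipeline, so the following is only a minimal sketch of the general idea it describes: converting an (image, report) pair into a CoT-style training instruction whose target response walks through intermediate reasoning before the final report. The schema, the `build_cot_instruction` helper, and the observe/reason/conclude decomposition are hypothetical illustrations, not the authors' actual format.

```python
# Hypothetical sketch: turning a (medical image, report) pair into a
# chain-of-thought style instruction pair for fine-tuning. The field
# names and the three-step decomposition are illustrative assumptions;
# the paper's actual construction method is not detailed in the abstract.

def build_cot_instruction(image_path: str, findings: str, impression: str) -> dict:
    """Compose a fine-grained instruction pair whose target response
    reasons step by step before the final impression, moving CoT from
    inference-time prompting into the training data itself."""
    response = (
        "Step 1 (observe): describe the salient visual findings.\n"
        f"{findings}\n"
        "Step 2 (reason): relate the findings to likely conditions.\n"
        "Step 3 (conclude): state the overall impression.\n"
        f"{impression}"
    )
    return {
        "image": image_path,
        "instruction": "Examine the image and generate a report, "
                       "explaining your reasoning step by step.",
        "response": response,
    }

if __name__ == "__main__":
    sample = build_cot_instruction(
        image_path="cxr_0001.png",  # placeholder path
        findings="The cardiac silhouette is enlarged; lungs are clear.",
        impression="Cardiomegaly without acute pulmonary disease.",
    )
    print(sample["response"])
```

Such instruction pairs could then be used to fine-tune an LVLM so that intermediate reasoning is supervised directly, rather than elicited only at inference time.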
URL
https://arxiv.org/abs/2406.11451