Abstract
In clinical practice, tri-modal medical image fusion can provide a more comprehensive view of lesions than existing dual-modal techniques, aiding physicians in evaluating a disease's shape, location, and biological activity. However, due to the limitations of imaging equipment and considerations for patient safety, the quality of medical images is usually limited, leading to sub-optimal fusion performance and hampering the depth of image analysis by the physician. Thus, there is an urgent need for a technology that can both enhance image resolution and integrate multi-modal information. Although current image processing methods can effectively address image fusion and super-resolution individually, solving both problems simultaneously remains extremely challenging. In this paper, we propose TFS-Diff, a model that simultaneously realizes tri-modal medical image fusion and super-resolution. Specifically, TFS-Diff is based on a diffusion model that generates images through a random iterative denoising process. We also develop a simple objective function, the proposed fusion super-resolution loss, which effectively evaluates the uncertainty in the fusion and ensures the stability of the optimization process. In addition, a channel attention module is proposed to effectively integrate key information from different modalities for clinical diagnosis, avoiding the information loss caused by multiple rounds of image processing. Extensive experiments on the public Harvard dataset show that TFS-Diff significantly surpasses existing state-of-the-art methods in both quantitative and visual evaluations. The source code will be available on GitHub.
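The abstract names a channel attention module for integrating the three modalities but gives no architecture details. Below is a minimal NumPy sketch of one common form of channel attention (squeeze-and-excitation-style gating over concatenated tri-modal features); all shapes, weight matrices, and the SE design itself are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    feats: (C, H, W) feature maps; w1: (C // r, C) and w2: (C, C // r)
    form a small bottleneck MLP with reduction ratio r.
    """
    z = feats.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)               # excitation MLP with ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # per-channel sigmoid gates in (0, 1)
    return feats * gates[:, None, None]       # reweight each channel map

# Three hypothetical single-channel modality feature maps (e.g. CT, MRI, PET),
# concatenated along the channel axis before attention-based fusion.
c, h, w, r = 12, 8, 8, 4
ct, mri, pet = (rng.standard_normal((c // 3, h, w)) for _ in range(3))
fused_in = np.concatenate([ct, mri, pet], axis=0)   # (12, 8, 8)

w1 = rng.standard_normal((c // r, c)) * 0.1
w2 = rng.standard_normal((c, c // r)) * 0.1
fused_out = channel_attention(fused_in, w1, w2)
print(fused_out.shape)  # (12, 8, 8)
```

Because the gates lie in (0, 1), the module can only attenuate, never amplify, a channel; in a learned network the excitation weights decide which modality's channels to emphasize for the fused output.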
URL
https://arxiv.org/abs/2404.17357