Abstract
This paper presents a proof-of-concept approach for learned synergistic reconstruction of medical images using multi-branch generative models. Our models, built on variational autoencoders (VAEs) and generative adversarial networks (GANs), learn from pairs of images simultaneously, enabling effective denoising and reconstruction. Synergistic image reconstruction is achieved by incorporating the trained models into a regularizer that evaluates the distance between the images and the model, analogously to multichannel dictionary learning (DiL). We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets, showcasing improved image quality and information sharing between modalities. Despite challenges such as patch decomposition and model limitations, our results underscore the potential of generative models for enhancing medical image reconstruction.
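The regularized objective the abstract describes can be sketched as a toy denoising problem: two noisy modality images are reconciled with a shared-latent, two-branch generator through a penalty on the distance between each image and the corresponding generator output. Everything below is an illustrative assumption, not the paper's implementation: the "trained" decoder is a fixed linear two-branch map standing in for a VAE/GAN decoder, the forward model is the identity (pure denoising), and all parameter values are arbitrary.

```python
import numpy as np

# Hypothetical pre-trained two-branch decoder (assumption: in the paper this
# role is played by the decoder of a VAE/GAN trained on paired images).
# Here it is a fixed random linear map purely for illustration.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((16, 4)) * 0.1   # shared latent -> modality-1 patch
D2 = rng.standard_normal((16, 4)) * 0.1   # shared latent -> modality-2 patch

def decode(z):
    """Two-branch generator: one shared latent code z, two image outputs."""
    return D1 @ z, D2 @ z

def synergistic_reconstruct(y1, y2, beta=1.0, steps=200, lr=0.05):
    """Alternating minimization (sketch) of
       ||x1 - y1||^2 + ||x2 - y2||^2
         + beta * (||x1 - G1(z)||^2 + ||x2 - G2(z)||^2)
    over the images (x1, x2) and the shared latent z."""
    x1, x2 = y1.copy(), y2.copy()
    z = np.zeros(4)
    for _ in range(steps):
        # latent update: gradient step on the regularizer (up to constants)
        g1, g2 = decode(z)
        grad_z = D1.T @ (g1 - x1) + D2.T @ (g2 - x2)
        z -= lr * grad_z
        # image update has a closed form: weighted average of the data term
        # and the generator output, which is how information is shared
        g1, g2 = decode(z)
        x1 = (y1 + beta * g1) / (1.0 + beta)
        x2 = (y2 + beta * g2) / (1.0 + beta)
    return x1, x2

y1 = rng.standard_normal(16)  # noisy "modality 1" patch
y2 = rng.standard_normal(16)  # noisy "modality 2" patch
x1, x2 = synergistic_reconstruct(y1, y2)
```

Because both image updates pull toward outputs of the same latent code, each modality's reconstruction is informed by the other, which is the "synergistic" coupling the abstract refers to.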
URL
https://arxiv.org/abs/2404.08748