Abstract
Magnetic resonance imaging (MRI) is essential for nasopharyngeal carcinoma (NPC) radiotherapy (RT), but practical constraints, such as patient discomfort, long scan times, and high costs, often lead to incomplete modalities in clinical practice, compromising RT planning accuracy. Traditional MRI synthesis methods are modality-specific, limited in anatomical adaptability, and lacking in clinical interpretability, and thus fail to meet the needs of NPC RT. Here, we developed a unified foundation model that integrates contrastive visual representation learning and vision-language alignment (VLA) to enable any-to-all MRI synthesis. The model uses a contrastive encoder to learn modality-invariant representations and a CLIP-based text-informed decoder for semantically consistent synthesis, supporting any-to-all translation within a single unified model. Trained on 40,825 images from 13 institutions, it achieves consistently high performance (average SSIM 0.90, PSNR 27 dB) across 26 internal and external validation sites (15,748 images), with superior synthesis fidelity and robustness to noise and domain shifts. Its unified representation also enhances downstream RT-relevant tasks (e.g., segmentation). This work advances digital medicine for NPC care by leveraging foundation models to bridge technical synthesis and clinical utility.
URL
https://arxiv.org/abs/2602.08822