Abstract
Self-supervised learning for inverse problems makes it possible to train a reconstruction network from noisy and/or incomplete data alone. These methods have the potential to enable learning-based solutions when obtaining ground-truth references for training is expensive or even impossible. In this paper, we propose a new self-supervised learning strategy devised for the challenging setting where measurements are observed via a single incomplete observation model. We introduce a new definition of equivariance in the context of reconstruction networks, and show that the combination of self-supervised splitting losses and equivariant reconstruction networks results in unbiased estimates of the supervised loss. Through a series of experiments on image inpainting, accelerated magnetic resonance imaging, and compressive sensing, we demonstrate that the proposed loss achieves state-of-the-art performance in settings with highly rank-deficient forward models.
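To make the splitting-loss idea concrete, the following is a minimal sketch of a generic measurement-splitting self-supervised loss for a fixed inpainting operator. It is not the paper's proposed loss or an equivariant network; the `reconstruct` function and all parameter choices are hypothetical placeholders, assumed only for illustration: a random subset of the observed entries is hidden from the reconstructor, and the loss is computed on that held-out subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setup: a single fixed inpainting mask (a rank-deficient forward model).
n = 64
signal = rng.normal(size=n)          # unknown ground truth (used only to simulate data)
obs_mask = rng.random(n) < 0.6       # fixed observation model: keep ~60% of entries
y = obs_mask * signal                # incomplete measurements

def reconstruct(y_masked, mask):
    """Placeholder 'network': fill missing entries with the mean of observed ones."""
    fill = y_masked[mask].mean()
    return np.where(mask, y_masked, fill)

def splitting_loss(y, mask, rng, p_drop=0.3):
    """Measurement-splitting loss: hide a random subset of the observed entries
    from the reconstructor and score its prediction only on the hidden subset."""
    hide = mask & (rng.random(y.shape) < p_drop)   # held-out measurements
    keep = mask & ~hide                            # measurements fed to the network
    x_hat = reconstruct(keep * y, keep)
    return np.mean((x_hat[hide] - y[hide]) ** 2)   # error on held-out entries only

loss = splitting_loss(y, obs_mask, rng)
print(loss)
```

Note that because the mask is fixed, the network never receives supervision in the permanently unobserved null space; this is the gap that, per the abstract, the equivariance property of the reconstruction network is meant to close.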
URL
https://arxiv.org/abs/2510.00929