Abstract
Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer vision and computational imaging. INRs represent images as continuous-domain functions realized by a neural network taking spatial coordinates as inputs. However, unlike traditional pixel representations, little is known about the sample complexity of estimating images with INRs in the context of linear inverse problems. Towards this end, we study the sampling requirements for recovering a continuous-domain image from its low-pass Fourier samples by fitting a single hidden-layer INR with ReLU activation and a Fourier features layer, trained with a generalized form of weight decay regularization. Our key insight is to relate minimizers of this non-convex parameter-space optimization problem to minimizers of a convex penalty defined over an infinite-dimensional space of measures. We identify a number of Fourier samples sufficient for an image realized by an INR to be exactly recovered by solving the INR training problem. To validate our theory, we empirically assess the probability of achieving exact recovery of images realized by low-width single hidden-layer INRs, and illustrate the performance of INRs on super-resolution recovery of continuous-domain phantom images.
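For concreteness, below is a minimal sketch (not the authors' implementation) of the training problem the abstract describes: a single hidden-layer INR with a Fourier features layer and ReLU activation, fit to low-pass Fourier samples of a target image under weight decay. The grid-based FFT approximation of the continuous-domain Fourier samples, the frequency cutoff `kmax`, the network width, and the regularization strength `lam` are all illustrative assumptions, and the paper's generalized weight decay is simplified here to a plain sum of squared weight norms.

```python
# Minimal sketch: single hidden-layer INR with Fourier features + ReLU,
# trained to match low-pass Fourier samples with weight decay.
# All hyperparameters are illustrative; this is not the authors' code.
import math
import torch
import torch.nn as nn

class FourierFeatureINR(nn.Module):
    def __init__(self, freqs, width):
        super().__init__()
        # freqs: (K, 2) integer frequency vectors for the embedding gamma(x)
        self.register_buffer("freqs", freqs.float())
        self.hidden = nn.Linear(2 * freqs.shape[0], width)  # inner weights/biases
        self.outer = nn.Linear(width, 1)                    # outer weights

    def forward(self, coords):
        # coords: (N, 2) points in [0, 1)^2
        phase = 2 * math.pi * coords @ self.freqs.T         # (N, K)
        gamma = torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)
        return self.outer(torch.relu(self.hidden(gamma)))

n = 64                               # fine grid used to approximate Fourier samples
xs = torch.arange(n) / n
grid = torch.cartesian_prod(xs, xs)  # (n*n, 2) coordinates in [0, 1)^2

kmax = 4                             # low-pass cutoff (assumed)
k = torch.arange(-kmax, kmax + 1)
kgrid = torch.cartesian_prod(k, k)   # Fourier-feature frequencies (assumed choice)

# Synthetic target realized by a low-width "teacher" INR, mirroring the
# exact-recovery experiments on images realized by low-width INRs.
teacher = FourierFeatureINR(kgrid, width=8)
with torch.no_grad():
    target = teacher(grid).reshape(n, n)

# Keep only the low-frequency corners of the 2-D FFT (|k| <= kmax).
mask = torch.zeros(n, n, dtype=torch.bool)
mask[:kmax + 1, :kmax + 1] = True
mask[-kmax:, :kmax + 1] = True
mask[:kmax + 1, -kmax:] = True
mask[-kmax:, -kmax:] = True
y = torch.fft.fft2(target)[mask]     # observed low-pass Fourier samples

model = FourierFeatureINR(kgrid, width=128)
lam = 1e-4                           # weight decay strength (illustrative)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    img = model(grid).reshape(n, n)
    pred = torch.fft.fft2(img)[mask]                 # predicted Fourier samples
    data_fit = (pred - y).abs().pow(2).sum()
    # Plain weight decay on inner and outer weights; the paper analyzes a
    # generalized form of this penalty.
    reg = model.hidden.weight.pow(2).sum() + model.outer.weight.pow(2).sum()
    loss = data_fit + lam * reg
    loss.backward()
    opt.step()
```

In this sketch, exact recovery would correspond to the trained INR reproducing the teacher's image; note that the FFT on a fine grid only approximates the continuous-domain Fourier samples analyzed in the theory.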
URL
https://arxiv.org/abs/2506.09949