Abstract
The deep image prior was recently introduced as a prior for natural images. It represents images as the output of a convolutional network with random inputs. For "inference", gradient descent is performed to adjust network parameters to make the output match observations. This approach yields good performance on a range of image reconstruction tasks. We show that the deep image prior is asymptotically equivalent to a stationary Gaussian process prior in the limit as the number of channels in each layer of the network goes to infinity, and derive the corresponding kernel. This informs a Bayesian approach to inference. We show that by conducting posterior inference using stochastic gradient Langevin dynamics we avoid the need for early stopping, which is a drawback of the current approach, and improve results for denoising and inpainting tasks. We illustrate these intuitions on a number of 1D and 2D signal reconstruction tasks.
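The two ideas summarized above — fitting a randomly-initialized network to observations, and replacing plain gradient descent with stochastic gradient Langevin dynamics so that posterior samples can be averaged — can be illustrated with a toy sketch. The following is a hypothetical NumPy example, not the paper's architecture or code: a one-hidden-layer network with a fixed random input is fit to a noisy 1D signal, with Gaussian noise injected into each gradient step (the SGLD update) and late-iteration outputs averaged as a posterior-mean reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the deep image prior (hypothetical, not the paper's code):
# f(W1, W2) = W2 @ relu(W1 @ z) with a fixed random input z, fit to a noisy
# 1D observation y via stochastic gradient Langevin dynamics.
n, h = 32, 64                              # signal length, hidden width
z = rng.normal(size=h)                     # fixed random network input
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
y = clean + 0.3 * rng.normal(size=n)       # noisy observation

W1 = rng.normal(size=(h, h)) / np.sqrt(h)
W2 = rng.normal(size=(n, h)) / np.sqrt(h)

def forward(W1, W2):
    a = np.maximum(W1 @ z, 0.0)            # ReLU hidden activations
    return W2 @ a, a

eps = 1e-3                                 # SGLD step size
noise_scale = 1e-2 * np.sqrt(2.0 * eps)    # injected-noise magnitude (toy choice)
loss0 = 0.5 * np.sum((forward(W1, W2)[0] - y) ** 2)

samples = []
for t in range(2000):
    out, a = forward(W1, W2)
    r = out - y                            # residual of the data term
    gW2 = np.outer(r, a)                   # grad of 0.5*||f - y||^2 w.r.t. W2
    gW1 = np.outer((W2.T @ r) * (a > 0), z)  # backprop through the ReLU
    # SGLD update: gradient step plus Gaussian noise
    W2 -= eps * gW2 + noise_scale * rng.normal(size=W2.shape)
    W1 -= eps * gW1 + noise_scale * rng.normal(size=W1.shape)
    if t >= 1000 and t % 50 == 0:          # collect late-iteration samples
        samples.append(forward(W1, W2)[0])

post_mean = np.mean(samples, axis=0)       # posterior-mean reconstruction
loss_final = 0.5 * np.sum((post_mean - y) ** 2)
print(loss_final < loss0)
```

Averaging samples along the trajectory, rather than taking the final iterate of a plain gradient-descent run, is what removes the dependence on a hand-tuned early-stopping point.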
URL
https://arxiv.org/abs/1904.07457