Abstract
Nuclear norm minimization (NNM) is commonly used to approximate the matrix rank by shrinking all singular values equally. However, the singular values have clear physical meanings in many practical problems, and NNM may fail to faithfully approximate the matrix rank. To alleviate this limitation of NNM, recent studies have suggested that weighted nuclear norm minimization (WNNM), which heuristically sets the weights to be inversely proportional to the singular values, can achieve a better rank estimation than NNM. However, a rigorous explanation of why WNNM is more effective than NNM in various applications is still lacking. In this paper, we analyze NNM and WNNM from the perspective of group sparse representation (GSR). Concretely, we devise an adaptive dictionary learning method to connect the rank minimization and GSR models. Based on the proposed dictionary, we prove that NNM and WNNM are equivalent to L1-norm minimization and weighted L1-norm minimization in GSR, respectively. Since weighted L1-norm minimization enhances sparsity compared with L1-norm minimization in sparse representation, this explains why WNNM is more effective than NNM. By integrating the image nonlocal self-similarity (NSS) prior into the WNNM model, we then apply it to the image denoising problem. Experimental results demonstrate that WNNM is more effective than NNM and outperforms several state-of-the-art methods in both objective and perceptual quality.
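The contrast between NNM and WNNM described above can be illustrated with a minimal sketch of singular-value shrinkage. This is not the paper's implementation; it assumes NNM corresponds to soft-thresholding all singular values by a uniform threshold `tau`, while WNNM shrinks each singular value by a weight inversely proportional to its magnitude (larger singular values, which carry more signal, are shrunk less). The function names, the constant `c`, and the single-pass shrinkage (the full WNNM solver iterates) are illustrative assumptions.

```python
import numpy as np

def nnm_shrink(Y, tau):
    """NNM-style shrinkage: soft-threshold every singular value by the
    same amount tau (uniform shrinkage, ignores singular-value meaning)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def wnnm_shrink(Y, c=2.0, eps=1e-16):
    """WNNM-style shrinkage (single pass, illustrative): each singular
    value s_i is thresholded by a weight w_i = c / (s_i + eps), so large
    singular values are penalized less than small ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)                  # weights inverse to singular values
    s_shrunk = np.maximum(s - w, 0.0)  # weighted soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt
```

In an NSS-based denoising pipeline, such shrinkage would be applied to each matrix of stacked similar patches, and the shrunk patches aggregated back into the image; the sketch above shows only the per-group low-rank step.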
URL
https://arxiv.org/abs/1608.04517