Paper Reading AI Learner

A Comparative Study for the Nuclear Norms Minimization Methods

2019-05-10 08:58:01
Zhiyuan Zha, Bihan Wen, Jiachao Zhang, Jiantao Zhou, Ce Zhu

Abstract

Nuclear norm minimization (NNM) is commonly used to approximate the matrix rank by shrinking all singular values equally. However, the singular values have clear physical meanings in many practical problems, and NNM may not faithfully approximate the matrix rank. To alleviate this limitation of NNM, recent studies have suggested that weighted nuclear norm minimization (WNNM), which heuristically sets the weights inversely proportional to the singular values, can achieve a better rank estimation than NNM. However, a rigorous explanation of why WNNM is more effective than NNM in various applications is still lacking. In this paper, we analyze NNM and WNNM from the perspective of group sparse representation (GSR). Concretely, an adaptive dictionary learning method is devised to connect the rank minimization and GSR models. Based on the proposed dictionary, we prove that NNM and WNNM are equivalent to L1-norm minimization and weighted L1-norm minimization in GSR, respectively. Inspired by the enhanced sparsity of weighted L1-norm minimization over L1-norm minimization in sparse representation, we thus explain why WNNM is more effective than NNM. By integrating the image nonlocal self-similarity (NSS) prior with the WNNM model, we then apply it to solve the image denoising problem. Experimental results demonstrate that WNNM is more effective than NNM and outperforms several state-of-the-art methods in both objective and perceptual quality.
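The contrast the abstract draws — NNM shrinks all singular values by the same amount, while WNNM shrinks them according to weights set inversely to the singular values — can be sketched with the two proximal operators below. This is a minimal NumPy illustration, not the paper's actual solver; the function names, the weight rule w_i = c / (sigma_i + eps), and the constants are illustrative assumptions.

```python
import numpy as np

def nnm_prox(Y, tau):
    """Proximal operator of the nuclear norm: soft-threshold
    every singular value by the same amount tau (uniform shrinkage)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def wnnm_prox(Y, c, eps=1e-8):
    """Weighted variant: weights are set inversely proportional to the
    singular values (w_i = c / (sigma_i + eps)), so large singular
    values -- which often carry the signal -- are shrunk less than
    small ones. This mirrors the heuristic the abstract describes."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt
```

For example, on a matrix with singular values (5, 1), uniform shrinkage with tau = 0.5 moves both values by 0.5, whereas the weighted rule barely touches the large value and suppresses the small one — the behavior that makes WNNM a better rank surrogate.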


URL

https://arxiv.org/abs/1608.04517

PDF

https://arxiv.org/pdf/1608.04517.pdf

