Paper Reading AI Learner

Analyzing the Weighted Nuclear Norm Minimization and Nuclear Norm Minimization based on Group Sparse Representation

2018-07-19 03:11:49
Zhiyuan Zha, Xin Yuan, Bei Li, Xinggan Zhang, Xin Liu, Lan Tang, Ying-Chang Liang

Abstract

Rank minimization methods have attracted considerable interest in various areas, such as computer vision and machine learning. The most representative work is nuclear norm minimization (NNM), which can recover the matrix rank exactly under certain restrictive theoretical conditions. However, in many real applications NNM cannot approximate the matrix rank accurately, since it tends to over-shrink the rank components. To rectify this weakness of NNM, recent advances have shown that weighted nuclear norm minimization (WNNM) can achieve a better matrix rank approximation than NNM by heuristically setting the weights inversely proportional to the singular values. However, a sound mathematical explanation of why WNNM is more feasible than NNM is still lacking. In this paper, we propose a scheme to analyze WNNM and NNM from the perspective of group sparse representation. Specifically, we design an adaptive dictionary to bridge the gap between the group sparse representation and rank minimization models. Based on this scheme, we provide a mathematical derivation that explains why WNNM is more feasible than NNM. Moreover, because the weights are set heuristically, WNNM sometimes produces errors in the SVD operation, and thus we present an adaptive weight-setting scheme to avoid such errors. We then apply the proposed scheme to two low-level vision tasks: image denoising and image inpainting. Experimental results demonstrate that WNNM is more feasible than NNM and that the proposed scheme outperforms many current state-of-the-art methods.
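To make the NNM/WNNM distinction concrete, the sketch below contrasts the two singular-value shrinkage rules the abstract describes: NNM soft-thresholds every singular value by the same amount, while WNNM sets each weight inversely proportional to the corresponding singular value, so large (informative) singular values are shrunk less. This is a minimal illustration using NumPy, not the authors' implementation; the constant `c` and the stabilizer `eps` are assumed parameters.

```python
import numpy as np

def nnm_shrink(Y, tau):
    """NNM proximal step: soft-threshold all singular values by the same tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def wnnm_shrink(Y, c, eps=1e-6):
    """WNNM-style step: weight each singular value inversely to its magnitude,
    so large singular values (dominant rank components) are shrunk less."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)                 # larger singular value -> smaller weight
    s_shrunk = np.maximum(s - w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

On a matrix with singular values 10 and 1, NNM with `tau = 0.5` shrinks both by 0.5, whereas the weighted rule reduces the dominant singular value by only about 0.05, illustrating why WNNM over-shrinks less.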


URL

https://arxiv.org/abs/1702.04463

PDF

https://arxiv.org/pdf/1702.04463.pdf
