
Performance analysis of weighted low rank model with sparse image histograms for face recognition under low-level illumination and occlusion

2020-07-24 05:59:28
K.V. Sridhar, Raghu vamshi Hemadri

Abstract

In a broad range of computer vision applications, the purpose of low-rank matrix approximation (LRMA) models is to recover the underlying low-rank matrix from its degraded observation. Recent LRMA methods, such as Robust Principal Component Analysis (RPCA), resort to nuclear norm minimization (NNM) as a convex relaxation of the non-convex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We use a more flexible model, the Weighted Schatten p-Norm Minimization (WSNM), which generalizes NNM to Schatten p-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption but also considers the importance of different rank components. In this paper, a comparison of the low-rank recovery performance of two LRMA algorithms, RPCA and WSNM, is carried out on occluded human facial images. The analysis is performed on facial images from the Yale database and our own database, where different facial expressions, spectacles, and varying illumination account for the facial occlusions. The paper also discusses the prominent trends observed in the experimental results obtained by applying these algorithms. As low-rank images sometimes fail to capture the details of a face adequately, we further propose a novel method that uses the image histograms of the resulting sparse images to identify the individual in a given image. Extensive experimental results show, both qualitatively and quantitatively, that WSNM surpasses RPCA by removing facial occlusions more effectively, thus yielding recovered low-rank images with higher PSNR and SSIM.
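To make the two ideas in the abstract concrete, the sketch below illustrates (1) a weighted shrinkage of singular values in the spirit of weighted Schatten p-norm minimization, which generalizes the uniform soft-thresholding used in nuclear norm minimization, and (2) identification by comparing histograms of the sparse (occlusion/detail) images. This is a minimal illustration only, not the authors' implementation: the function names (`weighted_schatten_p_shrink`, `sparse_histogram`, `identify`), the fixed-point shrinkage rule, the chi-square histogram distance, and the assumption that pixel intensities lie in [0, 1] are all choices made here for exposition.

```python
# Minimal sketch (not the paper's code) of weighted singular-value shrinkage
# and histogram-based identification from sparse images. Assumes numpy only.
import numpy as np

def weighted_schatten_p_shrink(Y, weights, p, n_iter=10):
    """Shrink the singular values of Y with per-component weights.

    Each singular value sigma_i is pushed towards a minimiser of
        0.5 * (x - sigma_i)^2 + w_i * x**p
    via a crude fixed-point iteration; the paper's solver is more elaborate,
    this only illustrates that larger weights shrink components more.
    """
    U, sigma, Vt = np.linalg.svd(Y, full_matrices=False)
    x = sigma.copy()
    for _ in range(n_iter):
        # x = sigma - w * p * x^(p-1), clipped at zero (small x guarded)
        x = np.maximum(sigma - weights * p * np.maximum(x, 1e-12) ** (p - 1), 0.0)
    return (U * x) @ Vt  # scale columns of U by shrunken singular values

def sparse_histogram(sparse_image, bins=64):
    """Normalised intensity histogram of a sparse image (intensities in [0, 1])."""
    hist, _ = np.histogram(np.abs(sparse_image).ravel(), bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def identify(query_sparse, gallery_sparse_images):
    """Index of the gallery entry whose sparse-image histogram is closest
    to the query's histogram (chi-square distance)."""
    q = sparse_histogram(query_sparse)
    dists = []
    for g in gallery_sparse_images:
        h = sparse_histogram(g)
        dists.append(np.sum((q - h) ** 2 / (q + h + 1e-12)))
    return int(np.argmin(dists))
```

In the RPCA/WSNM setting of the paper, `query_sparse` would correspond to the sparse component E of the decomposition D = A + E for the query face, with A the recovered low-rank image.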


URL

https://arxiv.org/abs/2007.12362

PDF

https://arxiv.org/pdf/2007.12362.pdf

