Paper Reading AI Learner

Reference-Based Autoencoder for Surface Defect Detection

2022-11-18 07:13:55
Wei Luo, Haiming Yao, Wenyong Yu, Xue Wang

Abstract

Due to the extreme imbalance between the amounts of normal and abnormal data, visual anomaly detection is important for the development of automated industrial product quality inspection. Unsupervised methods based on reconstruction and embedding have been widely studied for anomaly detection, of which reconstruction-based methods are the most popular. However, establishing a unified model for textured surface defect detection remains a challenge because these surfaces can vary in both homogeneous and non-regular ways. Furthermore, existing reconstruction-based methods lack a strong ability to convert defect features into normal features. To address these challenges, we propose a novel unsupervised reference-based autoencoder (RB-AE) to accurately inspect a variety of textured defects. Unlike most reconstruction-based methods, artificial defects and a novel pixel-level discrimination loss function are utilized for training, enabling the model to acquire pixel-level discrimination ability. First, the RB-AE employs an encoding module to extract multi-scale features of the textured surface. Subsequently, a novel reference-based attention module (RBAM) is proposed to convert defect features into normal features, suppressing the reconstruction of defects. In addition, the RBAM can also effectively suppress the defective feature residual caused by skip-connections. Next, a decoding module utilizes the repaired features to reconstruct the normal texture background. Finally, a novel multi-scale feature discrimination module (MSFDM) is employed for defect detection and segmentation.
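The four-stage pipeline the abstract describes (multi-scale encoding, reference-based attention to repair defect features, decoding, and multi-scale feature discrimination) can be illustrated with a toy numpy sketch. This is not the authors' implementation: every function name, the sigmoid-gated cosine-similarity attention, the pooling/upsampling choices, and all shapes are illustrative assumptions standing in for the paper's learned modules.

```python
import numpy as np

def encode(image, num_scales=3):
    """Toy encoder: multi-scale features via repeated 2x average pooling."""
    feats, f = [], image
    for _ in range(num_scales):
        h, w = f.shape
        f = f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(f)
    return feats

def rbam(defect_feat, reference_feat):
    """Toy stand-in for the reference-based attention module (RBAM):
    blend the input feature toward the normal reference feature using a
    sigmoid gate on their cosine similarity, suppressing defect responses."""
    q, k = defect_feat.ravel(), reference_feat.ravel()
    sim = q @ k / (np.linalg.norm(q) * np.linalg.norm(k) + 1e-8)
    w = 1.0 / (1.0 + np.exp(-sim))  # gate in (0, 1)
    return w * reference_feat + (1.0 - w) * defect_feat

def decode(feat, target_shape):
    """Toy decoder: nearest-neighbour 2x upsampling back to image size."""
    f = feat
    while f.shape[0] < target_shape[0]:
        f = np.kron(f, np.ones((2, 2)))
    return f

def msfdm(feats_a, feats_b, out_shape):
    """Toy multi-scale feature discrimination module (MSFDM): per-pixel
    anomaly map from feature differences, upsampled and averaged."""
    maps = []
    for a, b in zip(feats_a, feats_b):
        d = np.abs(a - b)
        while d.shape[0] < out_shape[0]:
            d = np.kron(d, np.ones((2, 2)))
        maps.append(d)
    return np.mean(maps, axis=0)

# End-to-end toy run: random stand-ins for a test image and a normal reference.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
reference = rng.random((16, 16))

in_feats = encode(image)                       # multi-scale input features
ref_feats = encode(reference)                  # multi-scale normal features
repaired = [rbam(f, r) for f, r in zip(in_feats, ref_feats)]
recon = decode(repaired[-1], image.shape)      # reconstructed texture
anomaly_map = msfdm(in_feats, encode(recon), image.shape)
```

In the actual RB-AE all of these modules are learned networks trained with artificial defects and a pixel-level discrimination loss; the sketch only mirrors the data flow between them.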

URL

https://arxiv.org/abs/2211.10060

PDF

https://arxiv.org/pdf/2211.10060.pdf

