AnoNet: Weakly Supervised Anomaly Detection in Textured Surfaces

2019-11-24 21:05:35
Manpreet Singh Minhas, John Zelek

Abstract

Humans can easily detect a defect (anomaly) because it is different or salient compared to the surface it resides on. Today, manual human visual inspection is still the norm because it is difficult to automate anomaly detection. Neural networks are a useful tool that can teach a machine to find defects; however, they require a large number of training examples to learn what a defect is, and such samples are tedious and expensive to collect. We tackle the problem of teaching a network with a low number of training samples with a system we call AnoNet. AnoNet's architecture is similar to CompactCNN, with the exceptions that (1) it is a fully convolutional network and does not use strided convolution; (2) it is shallow and compact, which minimizes over-fitting by design; (3) the compact design constrains the size of intermediate features, which allows training to be done without image downsizing; (4) the model footprint is low, making it suitable for edge computation; and (5) the anomaly can be detected and localized despite the weak labelling. AnoNet learns to detect the underlying shape of the anomalies despite the weak annotation while preserving the spatial localization of the anomaly. Pre-seeding AnoNet with an engineered filter bank initialization technique reduces the total samples required for training and also achieves state-of-the-art performance. Compared to CompactCNN, AnoNet achieved a 94% reduction in network parameters, from 1.13 million to 64 thousand. Experiments were conducted on four data-sets, and results were compared against CompactCNN and DeepLabv3. AnoNet improved the performance on average across all data-sets by 106% to an F1 score of 0.98 and by 13% to an AUROC value of 0.942. AnoNet can learn from a limited number of images. For one of the data-sets, AnoNet learnt to detect anomalies after a single pass through just 53 training images.
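
The abstract does not spell out AnoNet's exact layer configuration, so the following PyTorch sketch is only an illustration of the kind of network it describes: shallow, fully convolutional, stride-1 convolutions only (so the output anomaly map keeps the input resolution), and a parameter count on the order of tens of thousands. The layer widths and kernel sizes below are assumptions, not the published architecture.

    # Minimal sketch of a compact, fully convolutional anomaly-segmentation
    # network in the spirit of the description above. Layer widths and kernel
    # sizes are assumptions; the abstract only states that the network is
    # shallow, fully convolutional, avoids strided convolutions, and has
    # roughly 64k parameters.
    import torch
    import torch.nn as nn

    class CompactAnomalyNet(nn.Module):
        def __init__(self, in_channels: int = 1):
            super().__init__()
            # Stride-1, "same"-padded convolutions keep the spatial
            # resolution, so no image downsizing is needed.
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=11, padding=5),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=7, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            # 1x1 convolution head emits a one-channel per-pixel anomaly map.
            self.head = nn.Conv2d(16, 1, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.head(self.features(x)))

    if __name__ == "__main__":
        net = CompactAnomalyNet()
        print(sum(p.numel() for p in net.parameters()))  # ~59k with these assumed sizes
        mask = net(torch.randn(1, 1, 512, 512))
        print(mask.shape)  # torch.Size([1, 1, 512, 512]) -- same resolution as the input

The filter bank pre-seeding mentioned above would, in a setup like this, amount to copying a set of precomputed filter weights into the first convolution layer before training; the sketch omits that step.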

URL

https://arxiv.org/abs/1911.10608

PDF

https://arxiv.org/pdf/1911.10608.pdf

