Paper Reading AI Learner

Advancing Pre-trained Teacher: Towards Robust Feature Discrepancy for Anomaly Detection

2024-05-03 13:00:22
Canhui Tang, Sanping Zhou, Yizhe Li, Yonghao Dong, Le Wang

Abstract

With the wide application of knowledge distillation between an ImageNet pre-trained teacher model and a learnable student model, industrial anomaly detection has witnessed significant achievements in the past few years. The success of knowledge distillation mainly relies on how to keep the feature discrepancy between the teacher and student models, under the assumptions that: (1) the teacher model can jointly represent two different distributions for the normal and abnormal patterns, while (2) the student model can only reconstruct the normal distribution. However, it remains challenging to maintain these ideal assumptions in practice. In this paper, we propose a simple yet effective two-stage industrial anomaly detection framework, termed AAND, which sequentially performs Anomaly Amplification and Normality Distillation to obtain robust feature discrepancy. In the first, anomaly amplification stage, we propose a novel Residual Anomaly Amplification (RAA) module to advance the pre-trained teacher encoder. With exposure to synthetic anomalies, it amplifies anomalies via residual generation while maintaining the integrity of the pre-trained model. It mainly comprises a Matching-guided Residual Gate and an Attribute-scaling Residual Generator, which determine the proportion and characteristics of the residuals, respectively. In the second, normality distillation stage, we further employ a reverse distillation paradigm to train a student decoder, in which a novel Hard Knowledge Distillation (HKD) loss is built to better facilitate the reconstruction of normal patterns. Comprehensive experiments on the MvTecAD, VisA, and MvTec3D-RGB datasets show that our method achieves state-of-the-art performance.
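The abstract describes a teacher-student scheme in which anomaly scores come from the feature discrepancy between a pre-trained teacher (augmented with a gated residual branch) and a student decoder trained only on normal data. The sketch below illustrates that general idea in PyTorch; it is a minimal sketch based only on the abstract, not the authors' implementation. The module structure, layer choices, and the cosine-discrepancy scoring (a common choice in KD-based anomaly detection), as well as the names ResidualAmplifier and anomaly_map, are assumptions for illustration.

# Minimal, illustrative sketch (not the paper's released code). All shapes,
# layers, and hyperparameters below are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualAmplifier(nn.Module):
    """Hypothetical stand-in for the RAA idea: add a gated residual to frozen
    teacher features so that anomalous regions are pushed further away from
    the normal feature distribution, without modifying the teacher weights."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(            # rough analogue of a matching-guided gate
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid()
        )
        self.generator = nn.Sequential(       # rough analogue of a residual generator
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, teacher_feat: torch.Tensor) -> torch.Tensor:
        residual = self.generator(teacher_feat)
        # Gated residual added on top of the (frozen) teacher features.
        return teacher_feat + self.gate(teacher_feat) * residual


def anomaly_map(teacher_feats, student_feats, out_size):
    """Per-pixel anomaly score as 1 - cosine similarity between teacher and
    student features, averaged over feature levels (a common KD-based scheme)."""
    score = 0.0
    for t, s in zip(teacher_feats, student_feats):
        sim = F.cosine_similarity(t, s, dim=1, eps=1e-6)          # (B, H, W)
        score = score + F.interpolate(
            (1.0 - sim).unsqueeze(1), size=out_size,
            mode="bilinear", align_corners=False,
        )
    return score / len(teacher_feats)


if __name__ == "__main__":
    # Toy check with random multi-scale "features" (purely illustrative).
    t_feats = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
    s_feats = [f + 0.1 * torch.randn_like(f) for f in t_feats]
    amp = ResidualAmplifier(64)
    print(amp(t_feats[0]).shape)                           # torch.Size([1, 64, 32, 32])
    print(anomaly_map(t_feats, s_feats, (64, 64)).shape)   # torch.Size([1, 1, 64, 64])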

URL

https://arxiv.org/abs/2405.02068

PDF

https://arxiv.org/pdf/2405.02068.pdf

