Paper Reading AI Learner

Edge Detection based on Channel Attention and Inter-region Independence Test

2025-05-02 06:30:21
Ru-yu Yan, Da-Qing Zhang

Abstract

Existing edge detection methods often suffer from noise amplification and excessive retention of non-salient details, limiting their applicability in high-precision industrial scenarios. To address these challenges, we propose CAM-EDIT, a novel framework that integrates a Channel Attention Mechanism (CAM) and Edge Detection via Independence Testing (EDIT). The CAM module adaptively enhances discriminative edge features through multi-channel fusion, while the EDIT module employs region-wise statistical independence analysis (using Fisher's exact test and the chi-square test) to suppress uncorrelated noise. Extensive experiments on the BSDS500 and NYUDv2 datasets demonstrate state-of-the-art performance. Against nine comparison algorithms, CAM-EDIT achieves F-measure scores of 0.635 and 0.460 on the two datasets, improvements of 19.2% to 26.5% over traditional methods (Canny, CannySR), and outperforms recent learning-based methods (TIP2020, MSCNGP). Noise robustness evaluations further show a 2.2% PSNR improvement under Gaussian noise compared to baseline methods. Qualitative results exhibit cleaner edge maps with fewer artifacts, demonstrating the method's potential for high-precision industrial applications.
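The abstract describes the EDIT module as suppressing uncorrelated noise by testing statistical independence between regions, using Fisher's exact test and the chi-square test. A minimal sketch of that idea follows; the paper's exact contingency-table construction is not given in the abstract, so the 2×2 co-occurrence layout between two regions' binarized edge responses, and the switch to Fisher's exact test when any cell count is small, are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact


def keep_edge(region_a, region_b, alpha=0.05):
    """Decide whether the edge response shared by two regions should be kept.

    region_a, region_b: same-shape binary arrays (1 = edge pixel).
    Returns True when the two regions' edge responses are statistically
    dependent (a genuine, consistent edge) and False when they are
    independent (treated as uncorrelated noise and suppressed).

    NOTE: a hypothetical illustration of EDIT's region-wise test, not the
    authors' implementation.
    """
    a = np.asarray(region_a).astype(bool).ravel()
    b = np.asarray(region_b).astype(bool).ravel()

    # 2x2 contingency table of edge/non-edge co-occurrence across regions
    table = np.array([
        [np.sum(a & b),  np.sum(a & ~b)],
        [np.sum(~a & b), np.sum(~a & ~b)],
    ])

    # Fisher's exact test is preferred for small cell counts; the
    # chi-square test is the usual large-sample choice (assumed cutoff: 5).
    if table.min() < 5:
        _, p = fisher_exact(table)
    else:
        _, p, _, _ = chi2_contingency(table)

    # Reject independence => the regions agree on edge structure => keep.
    return p < alpha
```

For example, two patches whose edge pixels coincide yield a very small p-value (edge kept), while patches whose edge responses are unrelated yield a large p-value (response suppressed as noise).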


URL

https://arxiv.org/abs/2505.01040

PDF

https://arxiv.org/pdf/2505.01040.pdf
