Paper Reading AI Learner

The Effectiveness of Edge Detection Evaluation Metrics for Automated Coastline Detection

2024-05-19 09:51:10
Conor O'Sullivan, Seamus Coveney, Xavier Monteys, Soumyabrata Dev

Abstract

We analyse the effectiveness of RMSE, PSNR, SSIM and FOM for evaluating edge detection algorithms used for automated coastline detection. Typically, the accuracy of detected coastlines is assessed visually. This can be impractical on a large scale, leading to the need for objective evaluation metrics. Hence, we conduct an experiment to find reliable metrics. We apply Canny edge detection to 95 coastline satellite images across 49 testing locations. We vary the hysteresis thresholds and compare metric values to a visual analysis of the detected edges. We found that FOM was the most reliable metric for selecting the best threshold. It could select a better threshold 92.6% of the time and the best threshold 66.3% of the time. This is compared to RMSE, PSNR and SSIM, which could select the best threshold 6.3%, 6.3% and 11.6% of the time respectively. We provide a reason for these results by reformulating RMSE, PSNR and SSIM in terms of confusion matrix measures. This suggests these metrics not only fail for this experiment but are not useful for evaluating edge detection in general.
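
For intuition, the sketch below mirrors the experiment described in the abstract on a synthetic image: Canny edge detection is run with several hysteresis threshold pairs, and each detected edge map is scored against a reference edge map with RMSE, PSNR, SSIM and Pratt's Figure of Merit (FOM). This is not the authors' code; the synthetic "coastline" scene, the candidate thresholds and the FOM implementation (the standard Pratt formulation with alpha = 1/9) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.feature import canny
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)


def pratt_fom(detected, reference, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between two boolean edge maps."""
    # Distance from every pixel to the nearest reference (ground-truth) edge pixel.
    dist = distance_transform_edt(~reference)
    n_det, n_ref = detected.sum(), reference.sum()
    if max(n_det, n_ref) == 0:
        return 1.0
    return np.sum(1.0 / (1.0 + alpha * dist[detected] ** 2)) / max(n_det, n_ref)


# Synthetic stand-in for a coastline scene: dark "water" on the left,
# bright "land" on the right, with additive Gaussian noise.
rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[:, 64:] = 1.0
image += rng.normal(scale=0.15, size=image.shape)

# Ground-truth edge map: the known land/water boundary column.
reference = np.zeros(image.shape, dtype=bool)
reference[:, 64] = True

# Candidate hysteresis threshold pairs (low, high) -- illustrative values only.
candidates = [(0.05, 0.10), (0.10, 0.20), (0.20, 0.40), (0.30, 0.60)]

for low, high in candidates:
    detected = canny(image, sigma=2.0, low_threshold=low, high_threshold=high)
    ref_f, det_f = reference.astype(float), detected.astype(float)
    rmse = np.sqrt(mean_squared_error(ref_f, det_f))
    psnr = peak_signal_noise_ratio(ref_f, det_f, data_range=1.0)
    ssim = structural_similarity(ref_f, det_f, data_range=1.0)
    fom = pratt_fom(detected, reference)
    print(f"low={low:.2f} high={high:.2f}  RMSE={rmse:.3f}  PSNR={psnr:.2f}  "
          f"SSIM={ssim:.3f}  FOM={fom:.3f}")

# The paper's finding is that ranking candidates by FOM tends to recover the
# visually best threshold, while RMSE, PSNR and SSIM usually do not.
best = max(candidates, key=lambda t: pratt_fom(
    canny(image, sigma=2.0, low_threshold=t[0], high_threshold=t[1]), reference))
print("FOM-selected thresholds:", best)
```

On real coastline imagery the reference edge map would come from a manually annotated shoreline rather than a known synthetic boundary, but the threshold-selection logic is the same: score each candidate and keep the one with the highest FOM.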

URL

https://arxiv.org/abs/2405.11498

PDF

https://arxiv.org/pdf/2405.11498.pdf

