Paper Reading AI Learner

Importance of Disjoint Sampling in Conventional and Transformer Models for Hyperspectral Image Classification

2024-04-23 11:40:52
Muhammad Ahmad, Manuel Mazzara, Salvatore Distefano

Abstract

Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models. When training, validation, and test sets overlap or share data, the resulting bias inflates performance metrics and prevents an accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models on hyperspectral image classification (HSIC) tasks. By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data. By eliminating data leakage between sets, disjoint sampling provides reliable metrics for benchmarking progress in HSIC. Researchers can be confident that reported performance reflects a model's ability to classify new scenes, not just memorized pixels. This rigorous methodology is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. The source code is available at this https URL.
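The core of the method is the split protocol itself: test pixels are drawn only from the labeled ground truth that remains after the training and validation pixels have been removed, so no pixel index appears in more than one set. The sketch below illustrates such a per-class disjoint split; it is not the authors' released code, and the function name, split fractions, and NumPy-based indexing are illustrative assumptions.

```python
# Minimal sketch of per-class disjoint sampling for an HSI ground-truth map.
# Illustrative only (assumed names and ratios); not the authors' released code.
import numpy as np

def disjoint_split(gt, train_frac=0.1, val_frac=0.1, seed=0):
    """Return disjoint train/val/test flat pixel indices from a 2-D label map.

    gt: 2-D array, 0 = unlabeled, 1..C = class labels.
    """
    rng = np.random.default_rng(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in np.unique(gt):
        if c == 0:                       # skip unlabeled background pixels
            continue
        idx = np.flatnonzero(gt.ravel() == c)
        rng.shuffle(idx)                 # random order within the class
        n_tr = max(1, int(train_frac * idx.size))
        n_va = max(1, int(val_frac * idx.size))
        train_idx.append(idx[:n_tr])              # training pixels
        val_idx.append(idx[n_tr:n_tr + n_va])     # validation pixels
        test_idx.append(idx[n_tr + n_va:])        # only the remaining pixels
    return (np.concatenate(train_idx),
            np.concatenate(val_idx),
            np.concatenate(test_idx))
```

A model trained on `train_idx` and tuned on `val_idx` is then evaluated only on `test_idx`, which by construction contains pixels it has never seen during training or validation.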

URL

https://arxiv.org/abs/2404.14944

PDF

https://arxiv.org/pdf/2404.14944.pdf

