
Equivariant Imaging for Self-supervised Hyperspectral Image Inpainting

2024-04-19 19:55:15
Shuo Li, Mike Davies, Mehrdad Yaghoobi

Abstract

Hyperspectral imaging (HSI) is a key technology for earth observation, surveillance, medical imaging and diagnostics, astronomy and space exploration. The conventional technology for HSI in remote sensing applications is the push-broom scanning approach, in which the camera records the spectral image of one stripe of the scene at a time and the full image is generated by aggregating these measurements over time. In real-world airborne and spaceborne HSI instruments, empty stripes appear at certain locations because the platform does not always maintain a constant programmed attitude or have access to accurate digital elevation maps (DEM), and the travel track is not always aligned with the hyperspectral camera. This makes the enhancement of the acquired HS images from incomplete or corrupted observations an essential task. We introduce a novel HSI inpainting algorithm, called Hyperspectral Equivariant Imaging (Hyper-EI). Hyper-EI is a self-supervised learning-based method that requires neither training on extensive datasets nor access to a pre-trained model. Experimental results show that the proposed method achieves state-of-the-art inpainting performance compared to existing methods.

Abstract (translated)

Hyperspectral imaging (HSI) is a key technology for earth observation, surveillance, medical imaging and diagnostics, astronomy and space exploration. In remote sensing applications, the conventional HSI technology is based on a scanning approach in which the camera records the spectral image of one stripe of the scene at a time, and the image is then generated by accumulating these measurements over time. In real-world airborne and spaceborne HSI instruments, empty stripes appear at certain locations because the platform does not always maintain a constant programmed attitude or have access to accurate digital elevation maps (DEM), and the flight track is not always aligned with the hyperspectral camera. This makes enhancing the acquired HS images from incomplete or corrupted observations an essential task. Here we introduce a novel HSI inpainting algorithm named Hyperspectral Equivariant Imaging (Hyper-EI). Hyper-EI is a self-supervised learning method that does not require training on large datasets or access to a pre-trained model. Experimental results show that, compared to existing methods, the proposed method achieves state-of-the-art inpainting performance.
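The abstract only names the ingredients of Hyper-EI (a self-supervised, equivariant-imaging objective applied to a masking forward operator), so the following is a minimal PyTorch sketch of a generic equivariant-imaging loss for stripe inpainting, not the authors' actual implementation. The functions `masking_operator` and `ei_loss`, the random spatial shift used as the group action, the weighting `alpha`, the stand-in CNN and the 31-band toy cube are all illustrative assumptions.

```python
# Minimal sketch of an equivariant-imaging-style self-supervised objective for
# stripe inpainting, written against PyTorch. It illustrates the two generic
# ingredients of equivariant imaging (measurement consistency on the observed
# stripes plus equivariance to a transformation group); it is NOT the authors'
# Hyper-EI implementation, and every name below is illustrative.
import torch
import torch.nn as nn


def masking_operator(x, mask):
    """Inpainting forward operator A: keep observed pixels, zero the empty stripes."""
    return x * mask


def ei_loss(model, y, mask, alpha=1.0):
    """Measurement-consistency + equivariance loss (hypothetical weighting alpha).

    y    : observed hyperspectral cube with empty stripes, shape (B, bands, H, W)
    mask : binary mask, 1 on observed pixels, 0 on the missing stripes
    """
    x_hat = model(y)  # reconstruction from the incomplete cube
    # 1) Measurement consistency: re-masking the estimate should reproduce y.
    loss_mc = nn.functional.mse_loss(masking_operator(x_hat, mask), y)

    # 2) Equivariance: take a random spatial shift T_g (an assumed group action),
    #    re-measure, re-reconstruct, and ask for f(A T_g x_hat) ~ T_g x_hat.
    shift = int(torch.randint(1, x_hat.shape[-1], (1,)))
    x_shifted = torch.roll(x_hat, shifts=shift, dims=-1)     # T_g x_hat
    y_shifted = masking_operator(x_shifted, mask)            # A T_g x_hat
    loss_eq = nn.functional.mse_loss(model(y_shifted), x_shifted)

    return loss_mc + alpha * loss_eq


# Toy usage with a stand-in CNN and a simulated cube of 31 spectral bands.
model = nn.Sequential(
    nn.Conv2d(31, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 31, 3, padding=1),
)
x = torch.rand(1, 31, 64, 64)      # ground-truth cube (for simulation only)
mask = torch.ones_like(x)
mask[..., 20:24] = 0               # simulate a few empty stripes
y = masking_operator(x, mask)      # incomplete observation
loss = ei_loss(model, y, mask)
loss.backward()
```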

URL

https://arxiv.org/abs/2404.13159

PDF

https://arxiv.org/pdf/2404.13159.pdf

