Paper Reading AI Learner

Sampling Innovation-Based Adaptive Compressive Sensing

2025-03-17 14:54:13
Zhifu Tian, Tao Hu, Chaoyang Niu, Di Wu, Shu Wang

Abstract

Scene-aware Adaptive Compressive Sensing (ACS) has attracted significant interest due to its promising capability for efficient and high-fidelity acquisition of scene images. ACS typically prescribes adaptive sampling allocation (ASA) based on previous samples in the absence of ground truth. However, when confronting unknown scenes, existing ACS methods often lack accurate judgment and robust feedback mechanisms for ASA, limiting high-fidelity sensing of the scene. In this paper, we introduce a Sampling Innovation-Based ACS (SIB-ACS) method that effectively identifies and allocates sampling to challenging image reconstruction areas, culminating in high-fidelity image reconstruction. An innovation criterion is proposed to guide ASA by predicting the decrease in image reconstruction error attributable to sampling increments, thereby directing more samples toward regions where the reconstruction error diminishes significantly. A sampling innovation-guided multi-stage adaptive sampling (AS) framework is proposed, which iteratively refines the ASA through a multi-stage feedback process. For image reconstruction, we propose a Principal Component Compressed Domain Network (PCCD-Net), which efficiently and faithfully reconstructs images under AS scenarios. Extensive experiments demonstrate that the proposed SIB-ACS method significantly outperforms state-of-the-art methods in image reconstruction fidelity and visual quality. Code is available at this https URL.
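The allocation idea described in the abstract — predict how much each image region's reconstruction error would drop if it received extra samples (the "sampling innovation"), spend the budget where that predicted drop is largest, and refine the allocation over several stages — can be sketched as a toy simulation. Everything below is an illustrative assumption, not the paper's actual implementation: in particular, the error-decay model and the use of the current residual error as a stand-in for the predicted innovation are placeholders for the learned components of SIB-ACS.

```python
import numpy as np

def innovation_guided_allocation(innovation, budget):
    """Split an integer sampling budget across blocks in proportion to
    their predicted error decrease ('sampling innovation').
    Hypothetical sketch, not the paper's allocation rule."""
    w = innovation / innovation.sum()
    alloc = np.floor(w * budget).astype(int)
    # Hand the rounded-off leftover samples to the largest remainders.
    leftover = budget - alloc.sum()
    order = np.argsort(-(w * budget - alloc))
    alloc[order[:leftover]] += 1
    return alloc

def multi_stage_sampling(block_error, total_budget, stages=3):
    """Toy multi-stage feedback loop: at each stage, take the current
    residual error as a proxy for the predicted innovation, allocate
    the stage budget accordingly, and update a toy error model in
    which more samples shrink a block's error."""
    samples = np.zeros(block_error.size, dtype=int)
    error = block_error.astype(float).copy()
    per_stage = total_budget // stages
    for _ in range(stages):
        innovation = error + 1e-8          # proxy for predicted error decrease
        alloc = innovation_guided_allocation(innovation, per_stage)
        samples += alloc
        error = error / (1.0 + alloc)      # toy model: samples reduce error
    return samples, error
```

Running this with a few blocks of unequal difficulty shows the intended behavior: hard blocks (large initial error) accumulate most of the budget in early stages, and as their residual error falls, later stages redistribute samples toward the remaining blocks — the multi-stage feedback the abstract describes.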


URL

https://arxiv.org/abs/2503.13241

PDF

https://arxiv.org/pdf/2503.13241.pdf

