Paper Reading AI Learner

Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration

2024-09-28 16:33:43
Chu-Jie Qin, Rui-Qi Wu, Zikun Liu, Xin Lin, Chun-Le Guo, Hyun Hee Park, Chongyi Li

Abstract

All-in-one image restoration aims to handle multiple degradation types with a single model. This paper proposes a simple pipeline for blind all-in-one image restoration, Restore Anything with Masks (RAM). Rather than distinguishing degradation types as other methods do, we focus on image content, using Mask Image Modeling to extract intrinsic image information. Our pipeline consists of two stages: masked image pre-training and fine-tuning with Mask Attribute Conductance. We design a straightforward masking pre-training approach tailored specifically to all-in-one image restoration. It encourages the network to prioritize extracting image content priors from various degradations, yielding more balanced performance across restoration tasks and stronger overall results. To bridge the integrity gap between masked training inputs and complete inputs while preserving the learned image priors as much as possible, we selectively fine-tune only a small portion of the layers. Specifically, the importance of each layer is ranked by the proposed Mask Attribute Conductance (MAC), and the layers with higher contributions are selected for fine-tuning. Extensive experiments demonstrate that our method achieves state-of-the-art performance. Our code and model will be released at \href{this https URL}{this https URL}.
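
The two-stage pipeline described in the abstract can be illustrated with a minimal PyTorch sketch, assuming a generic restoration network: stage one masks random patches of the degraded input and trains the network to reconstruct the clean image (the patch masking scheme and L1 loss are assumptions, not taken from the paper), and stage two freezes the network and unfreezes only the layers ranked highest by their Mask Attribute Conductance scores, which are treated as precomputed here because the abstract does not give the MAC formula. Helper names such as random_patch_mask and select_layers_for_finetune are illustrative, not from the authors' code.

```python
# Hedged sketch of the two-stage RAM pipeline described in the abstract.
# All names (random_patch_mask, mac_scores, top_k) are illustrative assumptions;
# the actual Mask Attribute Conductance (MAC) computation is defined in the
# paper and is only taken as a precomputed dictionary here.
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_patch_mask(x: torch.Tensor, patch: int = 16, ratio: float = 0.5) -> torch.Tensor:
    """Zero out a random subset of non-overlapping patches (assumed masking scheme)."""
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > ratio).float()
    mask = F.interpolate(keep, size=(h, w), mode="nearest")
    return x * mask


def pretrain_step(model: nn.Module, degraded: torch.Tensor, clean: torch.Tensor,
                  optimizer: torch.optim.Optimizer) -> float:
    """Stage 1: masked image pre-training -- reconstruct the clean image from a
    masked degraded input so the network learns image content priors."""
    optimizer.zero_grad()
    pred = model(random_patch_mask(degraded))
    loss = F.l1_loss(pred, clean)  # L1 reconstruction loss (assumed)
    loss.backward()
    optimizer.step()
    return loss.item()


def select_layers_for_finetune(model: nn.Module, mac_scores: dict[str, float],
                               top_k: int) -> None:
    """Stage 2: freeze all parameters, then unfreeze only the top-k layers
    ranked by their (precomputed) Mask Attribute Conductance scores."""
    for p in model.parameters():
        p.requires_grad = False
    ranked = sorted(mac_scores, key=mac_scores.get, reverse=True)[:top_k]
    for name, module in model.named_modules():
        if name in ranked:
            for p in module.parameters():
                p.requires_grad = True
```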

URL

https://arxiv.org/abs/2409.19403

PDF

https://arxiv.org/pdf/2409.19403.pdf

