Paper Reading AI Learner

Transformer based Pluralistic Image Completion with Reduced Information Loss

2024-03-31 01:20:16
Qiankun Liu, Yuqi Jiang, Zhentao Tan, Dongdong Chen, Ying Fu, Qi Chu, Gang Hua, Nenghai Yu

Abstract

Transformer-based methods have achieved great success in image inpainting recently. However, we find that these solutions regard each pixel as a token, thus suffering from information loss in two respects: 1) They downsample the input image to a much lower resolution for efficiency. 2) They quantize $256^3$ RGB values into a small number (such as 512) of quantized color values. The indices of the quantized pixels are used as tokens for both the inputs and the prediction targets of the transformer. To mitigate these issues, we propose a new transformer-based framework called "PUT". Specifically, to avoid input downsampling while maintaining computational efficiency, we design a patch-based auto-encoder, P-VQVAE. Its encoder converts the masked image into non-overlapping patch tokens, and its decoder recovers the masked regions from the inpainted tokens while keeping the unmasked regions unchanged. To eliminate the information loss caused by input quantization, an Un-quantized Transformer is applied: it takes features directly from the P-VQVAE encoder without any quantization and regards the quantized tokens only as prediction targets. Furthermore, to make the inpainting process more controllable, we introduce semantic and structural conditions as extra guidance. Extensive experiments show that our method greatly outperforms existing transformer-based methods in image fidelity and achieves much higher diversity and better fidelity than state-of-the-art pluralistic inpainting methods on complex large-scale datasets (e.g., ImageNet). Code is available at this https URL.
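The two core ideas can be sketched in a few lines of NumPy: tokenize an image into non-overlapping patches rather than individual pixels, then vector-quantize the patch features against a small codebook, keeping the quantized indices only as prediction targets while the un-quantized features would feed the transformer. This is a minimal illustration under assumed toy shapes (32x32 image, 8x8 patches, a random "encoder" and codebook), not the authors' P-VQVAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32   # toy image resolution (assumed for illustration)
P = 8        # patch size -> (H // P) * (W // P) = 16 patch tokens
D = 4        # toy feature dimension per patch
K = 512      # codebook entries (the paper cites ~512 quantized values)

image = rng.random((H, W, 3))

# 1) Split the image into non-overlapping P x P patches: one token
#    per patch instead of one token per pixel, so no downsampling.
patches = image.reshape(H // P, P, W // P, P, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * 3)          # (16, 192)

# Stand-in "encoder": a fixed random projection to D-dim features.
W_enc = rng.standard_normal((P * P * 3, D))
features = patches @ W_enc                        # un-quantized features

# 2) Vector-quantize the features against a codebook (random here).
#    Only these indices serve as prediction targets; the transformer
#    input stays the un-quantized `features`, avoiding quantization loss.
dists = ((features[:, None, :] - rng.standard_normal((K, D))[None]) ** 2).sum(-1)
token_ids = dists.argmin(axis=1)                  # shape (16,)

print(features.shape)   # un-quantized transformer input
print(token_ids.shape)  # quantized indices, targets only
```

In the paper's framework the random projection is replaced by the learned P-VQVAE encoder and the codebook is learned jointly; the sketch only shows how patch tokenization and target-side quantization fit together.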


URL

https://arxiv.org/abs/2404.00513

PDF

https://arxiv.org/pdf/2404.00513.pdf

