Abstract
Transformer-based methods have recently achieved great success in image inpainting. However, we find that these solutions regard each pixel as a token, and thus suffer from information loss in two respects: 1) They downsample the input image to a much lower resolution for efficiency. 2) They quantize $256^3$ RGB values to a small number (such as 512) of quantized color values. The indices of the quantized pixels are used as tokens for both the inputs and the prediction targets of the transformer. To mitigate these issues, we propose a new transformer-based framework called "PUT". Specifically, to avoid input downsampling while maintaining computational efficiency, we design a patch-based auto-encoder, P-VQVAE. The encoder converts the masked image into non-overlapped patch tokens, and the decoder recovers the masked regions from the inpainted tokens while keeping the unmasked regions unchanged. To eliminate the information loss caused by input quantization, an Un-quantized Transformer is applied. It directly takes features from the P-VQVAE encoder as input without any quantization and regards the quantized tokens only as prediction targets. Furthermore, to make the inpainting process more controllable, we introduce semantic and structural conditions as extra guidance. Extensive experiments show that our method greatly outperforms existing transformer-based methods on image fidelity and achieves much higher diversity and better fidelity than state-of-the-art pluralistic inpainting methods on complex large-scale datasets (e.g., ImageNet). Code is available at this https URL.
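The core idea above can be illustrated with a minimal sketch: split the image into non-overlapping patches, encode each patch into a feature vector, and map each feature to its nearest codebook entry. The resulting integer indices serve only as the transformer's prediction targets, while the un-quantized features are what the transformer consumes as input. All sizes, names, and the flattening "encoder" below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy settings (illustrative, not the paper's actual sizes): a 32x32 RGB
# image, 8x8 non-overlapping patches, and a codebook of 512 entries.
H = W = 32
P = 8                      # patch size
K = 512                    # codebook size
D = P * P * 3              # flattened patch dimension

image = rng.random((H, W, 3)).astype(np.float32)

# Encoder stand-in: flatten each non-overlapping patch into a feature vector.
# (P-VQVAE uses a learned patch-based encoder; flattening keeps the sketch simple.)
patches = image.reshape(H // P, P, W // P, P, 3).transpose(0, 2, 1, 3, 4)
features = patches.reshape(-1, D)          # (16, 192) un-quantized features

codebook = rng.random((K, D)).astype(np.float32)

# Vector quantization: each patch feature maps to its nearest codebook index.
# These indices serve as prediction *targets* only; the un-quantized
# `features` themselves are what the transformer takes as input.
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
tokens = dists.argmin(axis=1)              # (16,) integer token ids
```

Because the transformer never sees the quantized vectors on the input side, no color information is discarded before prediction; quantization only discretizes the output space.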
URL
https://arxiv.org/abs/2404.00513