Abstract
Existing deep learning-based image inpainting methods typically rely on convolutional networks that operate on RGB images alone. However, relying exclusively on RGB input can neglect important depth information, which plays a critical role in understanding the spatial and structural context of a scene. Just as human vision leverages stereo cues to perceive depth, incorporating depth maps into the inpainting process can enhance a model's ability to reconstruct images with greater accuracy and contextual awareness. In this paper, we propose a novel approach that incorporates both RGB and depth images for enhanced image inpainting. Our model employs a dual-encoder architecture, in which one encoder processes the RGB image and the other handles the depth image. The encoded features from both encoders are then fused in the decoder using an attention mechanism, effectively integrating the RGB and depth representations. We use two masking strategies, line and square, to test the robustness of the model under different types of occlusion. To further analyze the effectiveness of our approach, we use Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations to examine the regions the model attends to during inpainting. We show that incorporating depth information alongside the RGB image significantly improves reconstruction quality. Through both qualitative and quantitative comparisons, we demonstrate that the depth-integrated model outperforms the baseline, with the attention mechanism further enhancing inpainting performance, as evidenced by multiple evaluation metrics and visualizations.
URL
https://arxiv.org/abs/2505.00735