Abstract
We present a method for finding cross-modal space-time correspondences. Given two images from different visual modalities, such as an RGB image and a depth map, our model identifies which pairs of pixels correspond to the same physical points in the scene. To solve this problem, we extend the contrastive random walk framework to simultaneously learn cycle-consistent feature representations for both cross-modal and intra-modal matching. The resulting model is simple and makes no explicit photo-consistency assumptions. It can be trained entirely on unlabeled data, without the need for any spatially aligned multimodal image pairs. We evaluate our method on both geometric and semantic correspondence tasks. For geometric matching, we consider challenging tasks such as RGB-to-depth and RGB-to-thermal matching (and vice versa); for semantic matching, we evaluate on photo-sketch and cross-style image alignment. Our method achieves strong performance across all benchmarks.
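The abstract describes extending the contrastive random walk to cross-modal pairs via cycle-consistent matching. Below is a minimal PyTorch sketch of the kind of cross-modal cycle-consistency loss this setup suggests: features walk from one modality to the other and back, and each node is trained to return to itself. The function names, feature shapes, and temperature are illustrative assumptions, not the authors' implementation (which also learns intra-modal walks).

```python
import torch
import torch.nn.functional as F

def affinity(src, dst, tau=0.07):
    # Row-stochastic transition matrix from src nodes to dst nodes.
    # src: (N, D), dst: (M, D) L2-normalized pixel/patch features.
    return F.softmax(src @ dst.t() / tau, dim=1)

def cross_modal_cycle_loss(feat_rgb, feat_depth, tau=0.07):
    # Walk RGB -> depth -> RGB; the round-trip transition matrix
    # should place its mass on the diagonal (cycle consistency).
    feat_rgb = F.normalize(feat_rgb, dim=1)
    feat_depth = F.normalize(feat_depth, dim=1)
    fwd = affinity(feat_rgb, feat_depth, tau)   # cross-modal step
    bwd = affinity(feat_depth, feat_rgb, tau)   # return step
    round_trip = fwd @ bwd                      # (N, N) cycle probabilities
    targets = torch.arange(round_trip.size(0), device=round_trip.device)
    return F.nll_loss(torch.log(round_trip + 1e-8), targets)

# Example: 196 patch features per modality, 128-dim embeddings.
loss = cross_modal_cycle_loss(torch.randn(196, 128), torch.randn(196, 128))
```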
URL
https://arxiv.org/abs/2506.03148