Abstract
We introduce XoFTR, a cross-modal, cross-view method for local feature matching between thermal infrared (TIR) and visible images. Unlike visible images, TIR images are less susceptible to adverse lighting and weather conditions, but they are harder to match because of significant texture and intensity differences. Existing hand-crafted and learning-based methods for visible-TIR matching fall short in handling viewpoint, scale, and texture diversity. To address this, XoFTR incorporates masked image modeling pre-training and fine-tuning with pseudo-thermal image augmentation to bridge the modality gap. We also introduce a refined matching pipeline that adjusts for scale discrepancies and improves match reliability through sub-pixel refinement. To validate our approach, we collect a comprehensive visible-thermal dataset and show that our method outperforms existing methods on multiple benchmarks.
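The sub-pixel refinement mentioned above is commonly implemented (e.g. in LoFTR-style matchers) as a softmax-weighted expectation of coordinates over a local correlation heatmap around each coarse match. A minimal sketch of that idea, assuming a simple NumPy heatmap and not reflecting XoFTR's exact implementation:

```python
import numpy as np

def subpixel_refine(heatmap):
    """Refine a coarse match to sub-pixel accuracy.

    Takes a local correlation heatmap centered on the coarse match and
    returns the softmax-weighted expected (x, y) offset from the window
    center. Illustrative sketch only; not XoFTR's actual code.
    """
    h, w = heatmap.shape
    # Softmax over the window turns scores into a probability distribution.
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()
    # Coordinate grids for computing the expectation.
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected offset relative to the window center (the coarse match).
    cx = float((p * xs).sum()) - (w - 1) / 2
    cy = float((p * ys).sum()) - (h - 1) / 2
    return cx, cy
```

A sharply peaked heatmap yields an offset near the peak; a flatter one pulls the estimate toward the window center, which acts as a mild reliability prior.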
URL
https://arxiv.org/abs/2404.09692