Abstract
The assumption of a static environment is common in many geometric computer vision tasks such as SLAM but limits their applicability in highly dynamic scenes. Since these tasks rely on identifying point correspondences between input images within the static part of the environment, we propose a graph neural network-based sparse feature matching network designed to perform robust matching under challenging conditions while excluding keypoints on moving objects. Like state-of-the-art feature matching networks, we enhance keypoint representations through attentional aggregation over graph edges, but we augment the graph with epipolar and temporal information and vastly reduce the number of edges. Furthermore, we introduce a self-supervised training scheme that extracts pseudo labels for image pairs in dynamic environments solely from unprocessed visual-inertial data. A series of experiments shows that our network outperforms state-of-the-art feature matching networks at excluding keypoints on moving objects while achieving comparable results on conventional matching metrics. When integrated into a SLAM system, our network significantly improves performance, especially in highly dynamic scenes.
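The abstract does not include an implementation, but the following minimal PyTorch sketch illustrates the general idea of attentional aggregation restricted to an explicit, reduced set of graph edges instead of a fully connected keypoint graph. All names (`SparseEdgeAttention`, `edge_index`) and the layer layout are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (assumed, not the authors' code): attentional aggregation of
# keypoint descriptors over an explicit, sparse edge list, e.g. edges
# pre-filtered by an epipolar or temporal criterion.
import torch
import torch.nn as nn

class SparseEdgeAttention(nn.Module):
    """One attention layer that updates keypoint features using only the
    edges listed in `edge_index` instead of a fully connected graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) keypoint descriptors; edge_index: (2, E) rows (target, source)
        tgt, src = edge_index
        q = self.q(x)[tgt]                 # query of the receiving keypoint
        k = self.k(x)[src]                 # key of the sending keypoint
        v = self.v(x)[src]                 # value of the sending keypoint
        score = (q * k).sum(-1) / x.shape[-1] ** 0.5   # (E,) unnormalised attention

        # Softmax over the incoming edges of each target node.
        w = torch.exp(score - score.max())
        denom = torch.zeros(x.shape[0], device=x.device).index_add_(0, tgt, w)
        w = w / (denom[tgt] + 1e-9)

        # Weighted aggregation of messages, then residual MLP update.
        msg = torch.zeros_like(x).index_add_(0, tgt, w.unsqueeze(-1) * v)
        return x + self.mlp(torch.cat([x, msg], dim=-1))

# Toy usage: 5 keypoints, 256-d descriptors, a hand-picked sparse edge list.
x = torch.randn(5, 256)
edge_index = torch.tensor([[0, 0, 1, 2, 3], [1, 2, 0, 4, 4]])
x = SparseEdgeAttention(256)(x, edge_index)
```

Restricting attention to a precomputed edge list keeps the per-layer cost proportional to the number of edges rather than quadratic in the number of keypoints, which is the practical benefit of vastly reducing the edge count.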
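Regarding the self-supervised pseudo labels, the abstract only states that they are extracted from unprocessed visual-inertial data. One plausible ingredient of such a scheme (an assumption for illustration, not confirmed by the paper) is to test putative matches against the epipolar geometry implied by a visual-inertial pose estimate and to flag inconsistent matches as likely dynamic; `pseudo_labels` and the pixel threshold below are hypothetical.

```python
# Illustrative sketch only (assumed, not from the paper): derive static/dynamic
# pseudo labels for putative matches from a relative camera pose, e.g. one
# obtained from visual-inertial odometry.
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix of a translation vector t."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def pseudo_labels(pts0, pts1, K, R, t, thresh_px=2.0):
    """Label matches as static (True) or dynamic (False).

    pts0, pts1: (N, 2) matched pixel coordinates in images 0 and 1.
    K: (3, 3) camera intrinsics; R, t: pose of camera 1 relative to camera 0.
    thresh_px: epipolar distance threshold in pixels (hypothetical value).
    """
    E = skew(t) @ R                                  # essential matrix from the pose
    F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)    # fundamental matrix
    ones = np.ones((len(pts0), 1))
    x0 = np.hstack([pts0, ones])                     # homogeneous pixel coordinates
    x1 = np.hstack([pts1, ones])

    # Average point-to-epipolar-line distance of each correspondence.
    l1 = x0 @ F.T                                    # epipolar lines in image 1
    l0 = x1 @ F                                      # epipolar lines in image 0
    num = np.abs(np.sum(x1 * l1, axis=1))            # |x1^T F x0|
    d = 0.5 * num * (1.0 / np.linalg.norm(l1[:, :2], axis=1)
                     + 1.0 / np.linalg.norm(l0[:, :2], axis=1))
    return d < thresh_px
```

A check of this kind only separates matches that move inconsistently with the camera; matches on moving objects that happen to satisfy the epipolar constraint would need additional cues, which is why this is at most one component of a full pseudo-labeling scheme.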
URL
https://arxiv.org/abs/2403.11370