Abstract
Local feature matching is essential for many applications, such as localization and 3D reconstruction. However, matching feature points accurately across varying camera viewpoints and illumination conditions is challenging. In this paper, we propose a framework that robustly extracts and describes salient local features regardless of changes in lighting and viewpoint. The framework suppresses illumination variation and emphasizes structural information, so that the model ignores lighting noise and focuses on edges. We classify the elements of the feature covariance matrix, which implicitly encodes feature map information, into two components. Our model extracts feature points from salient regions, reducing incorrect matches. In our experiments, the proposed method achieved higher accuracy than state-of-the-art methods on public datasets such as HPatches, Aachen Day-Night, and ETH, which feature highly variant viewpoints and illumination.
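The abstract refers to classifying the elements of a feature covariance matrix into two components. As a rough illustrative sketch only (not the paper's actual pipeline), one natural decomposition of the covariance of a CNN feature map splits the diagonal (per-channel variances) from the off-diagonal entries (cross-channel correlations); whether this matches the paper's exact decomposition is an assumption, and `feature_covariance` is a hypothetical helper name.

```python
import numpy as np

def feature_covariance(fmap):
    """Covariance across spatial positions of a (C, H, W) feature map.

    Illustrative sketch, not the authors' released code.
    """
    C, H, W = fmap.shape
    X = fmap.reshape(C, H * W)               # flatten spatial dims: (C, N)
    X = X - X.mean(axis=1, keepdims=True)    # zero-mean each channel
    return (X @ X.T) / (H * W - 1)           # (C, C) sample covariance

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))      # toy feature map
cov = feature_covariance(fmap)

# One possible two-way split of the covariance elements:
variances = np.diag(cov)                     # diagonal: per-channel energy
correlations = cov - np.diag(variances)      # off-diagonal: channel interactions
print(cov.shape)                             # (8, 8)
```

The symmetric (C, C) matrix summarizes the feature map independently of spatial resolution, which is why covariance statistics are a common handle for style and illumination effects in feature space.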
URL
https://arxiv.org/abs/2301.10413