Abstract
In vision-based robot localization and SLAM, Visual Place Recognition (VPR) is essential. This paper addresses the problem of VPR, which involves accurately recognizing the location corresponding to a given query image. A popular approach to vision-based place recognition relies on low-level visual features. Despite significant progress in recent years, place recognition based on low-level visual features remains challenging when scene appearance changes. To address this, end-to-end training approaches have been proposed to overcome the limitations of hand-crafted features. However, these approaches still fail under drastic changes and require large amounts of labeled data to train the models, which is a significant limitation. Methods that leverage high-level semantic information, such as objects or categories, have been proposed to handle variations in appearance. In this paper, we introduce a novel VPR approach that remains robust to scene changes and does not require additional training. Our method constructs semantic image descriptors by extracting pixel-level embeddings using a zero-shot, language-driven semantic segmentation model. We validate our approach in challenging place recognition scenarios using a real-world public dataset. The experiments demonstrate that our method outperforms non-learned image representation techniques and off-the-shelf convolutional neural network (CNN) descriptors. Our code is available at https://github.com/woo-soojin/context-based-vlpr.
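The following is a minimal, hypothetical sketch of the descriptor-and-matching idea summarized above, not the paper's actual implementation. It assumes a placeholder function `pixel_embeddings(image)` that returns an (H, W, D) array of per-pixel embeddings from a zero-shot, language-driven segmentation model; the pooling scheme, descriptor format, and matching strategy used in the paper may differ.

```python
# Hypothetical sketch: build a semantic image descriptor by pooling per-pixel
# embeddings, then match a query against a database by cosine similarity.
# `pixel_embeddings(image)` is a placeholder for the language-driven
# segmentation model's per-pixel embedding output (shape: H x W x D).
import numpy as np

def build_descriptor(embeddings: np.ndarray) -> np.ndarray:
    """Pool per-pixel embeddings (H, W, D) into one L2-normalized vector (D,)."""
    desc = embeddings.reshape(-1, embeddings.shape[-1]).mean(axis=0)
    return desc / (np.linalg.norm(desc) + 1e-12)

def match_place(query_desc: np.ndarray, db_descs: np.ndarray) -> int:
    """Return the index of the database descriptor most similar to the query."""
    db = db_descs / (np.linalg.norm(db_descs, axis=1, keepdims=True) + 1e-12)
    return int(np.argmax(db @ query_desc))

# Usage (shapes only; replace `pixel_embeddings` with the actual model call):
# q = build_descriptor(pixel_embeddings(query_image))                      # (D,)
# db = np.stack([build_descriptor(pixel_embeddings(im)) for im in map_images])
# best_match_index = match_place(q, db)
```

Mean pooling is chosen here only to keep the sketch compact; other aggregations (e.g., per-class histograms over the segmentation output) fit the same matching pipeline.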
URL
https://arxiv.org/abs/2410.19341