Paper Reading AI Learner

Context-Based Visual-Language Place Recognition

2024-10-25 06:59:11
Soojin Woo, Seong-Woo Kim

Abstract

In vision-based robot localization and SLAM, Visual Place Recognition (VPR) is essential. This paper addresses the problem of VPR, which involves accurately recognizing the location corresponding to a given query image. A popular approach to vision-based place recognition relies on low-level visual features. Despite significant progress in recent years, place recognition based on low-level visual features remains challenging when scene appearance changes. To address this, end-to-end training approaches have been proposed to overcome the limitations of hand-crafted features. However, these approaches still fail under drastic appearance changes and require large amounts of labeled data to train models, presenting a significant limitation. Methods that leverage high-level semantic information, such as objects or categories, have been proposed to handle variations in appearance. In this paper, we introduce a novel VPR approach that remains robust to scene changes and does not require additional training. Our method constructs semantic image descriptors by extracting pixel-level embeddings using a zero-shot, language-driven semantic segmentation model. We validate our approach in challenging place recognition scenarios using a real-world public dataset. The experiments demonstrate that our method outperforms non-learned image representation techniques and off-the-shelf convolutional neural network (CNN) descriptors. Our code is available at https://github.com/woo-soojin/context-based-vlpr.
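As a rough illustration of the pipeline the abstract describes (per-pixel embeddings from a language-driven segmentation model, pooled into a semantic image descriptor and matched against a database), here is a minimal sketch. The mean-pooling aggregation, the random stand-in for the embedding map, and the function names are assumptions for illustration only; they are not the paper's exact formulation.

```python
# Sketch: pool a per-pixel embedding map (e.g., from an LSeg-style
# zero-shot segmentation model -- assumed interface) into one descriptor,
# then match a query against a database by cosine similarity.
import numpy as np

def image_descriptor(pixel_embeddings: np.ndarray) -> np.ndarray:
    """Pool an (H, W, D) pixel-embedding map into a D-dim descriptor.

    Mean pooling is one simple choice; the paper's actual aggregation
    may differ.
    """
    desc = pixel_embeddings.reshape(-1, pixel_embeddings.shape[-1]).mean(axis=0)
    return desc / (np.linalg.norm(desc) + 1e-12)  # L2-normalize

def recognize_place(query_desc: np.ndarray, db_descs: np.ndarray) -> int:
    """Return the index of the most similar database descriptor.

    db_descs is (N, D) with L2-normalized rows, so cosine similarity
    reduces to a dot product.
    """
    return int(np.argmax(db_descs @ query_desc))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, W, D, N = 8, 8, 512, 5
    # Random arrays stand in for real model outputs in this sketch.
    db = np.stack([image_descriptor(rng.normal(size=(H, W, D)))
                   for _ in range(N)])
    query = image_descriptor(rng.normal(size=(H, W, D)))
    print("best match:", recognize_place(query, db))
```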

URL

https://arxiv.org/abs/2410.19341

PDF

https://arxiv.org/pdf/2410.19341.pdf

