Paper Reading AI Learner

Generalized Object Search

2023-01-24 16:41:36
Kaiyu Zheng

Abstract

Future collaborative robots must be capable of finding objects. As such a fundamental skill, we expect object search to eventually become an off-the-shelf capability for any robot, similar to e.g., object detection, SLAM, and motion planning. However, existing approaches either make unrealistic compromises (e.g., reduce the problem from 3D to 2D), resort to ad-hoc, greedy search strategies, or attempt to learn end-to-end policies in simulation that are yet to generalize across real robots and environments. This thesis argues that through using Partially Observable Markov Decision Processes (POMDPs) to model object search while exploiting structures in the human world (e.g., octrees, correlations) and in human-robot interaction (e.g., spatial language), a practical and effective system for generalized object search can be achieved. In support of this argument, I develop methods and systems for (multi-)object search in 3D environments under uncertainty due to limited field of view, occlusion, noisy, unreliable detectors, spatial correlations between objects, and possibly ambiguous spatial language (e.g., "The red car is behind Chase Bank"). Besides evaluation in simulators such as PyGame, AirSim, and AI2-THOR, I design and implement a robot-independent, environment-agnostic system for generalized object search in 3D and deploy it on the Boston Dynamics Spot robot, the Kinova MOVO robot, and the Universal Robots UR5e robotic arm, to perform object search in different environments. The system enables, for example, a Spot robot to find a toy cat hidden underneath a couch in a kitchen area in under one minute. This thesis also broadly surveys the object search literature, proposing taxonomies in object search problem settings, methods and systems.
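To make the abstract's central modeling idea concrete, below is a minimal, self-contained Python sketch of the Bayesian belief update that underlies POMDP-style object search with a noisy, limited-field-of-view detector. This is not the thesis's implementation (which maintains 3D octree beliefs and plans with online POMDP solvers); the grid size, detector rates, and helper names here are illustrative assumptions only.

```python
# Toy 2D illustration of the core mechanic of POMDP-based object search:
# keep a probability distribution ("belief") over where the target is,
# and update it with Bayes' rule after each noisy detection attempt.
# All constants and names are hypothetical; the thesis's real system works
# in 3D with octree beliefs and proper POMDP planning, not this greedy rule.

import random

GRID = [(x, y) for x in range(5) for y in range(5)]  # 5x5 search area
TRUE_POS = (3, 4)                                    # hidden object location

# Detector model: probability of a "detected" reading, depending on whether
# the object actually lies inside the camera's field of view.
P_DETECT_IF_PRESENT = 0.9   # true positive rate
P_DETECT_IF_ABSENT = 0.05   # false positive rate

def fov(robot_pos, radius=1):
    """Cells currently visible to the robot (limited field of view)."""
    rx, ry = robot_pos
    return {(x, y) for (x, y) in GRID
            if abs(x - rx) <= radius and abs(y - ry) <= radius}

def sense(robot_pos):
    """Simulate one noisy detector reading at the robot's current pose."""
    present = TRUE_POS in fov(robot_pos)
    p = P_DETECT_IF_PRESENT if present else P_DETECT_IF_ABSENT
    return random.random() < p

def update_belief(belief, robot_pos, detected):
    """Bayesian update: P(loc | z) is proportional to P(z | loc) * P(loc)."""
    visible = fov(robot_pos)
    new_belief = {}
    for loc, prior in belief.items():
        in_view = loc in visible
        if detected:
            likelihood = P_DETECT_IF_PRESENT if in_view else P_DETECT_IF_ABSENT
        else:
            likelihood = (1 - P_DETECT_IF_PRESENT) if in_view else (1 - P_DETECT_IF_ABSENT)
        new_belief[loc] = likelihood * prior
    total = sum(new_belief.values())
    return {loc: p / total for loc, p in new_belief.items()}

# Uniform prior; greedily move toward the belief peak and re-sense.
belief = {loc: 1.0 / len(GRID) for loc in GRID}
robot = (0, 0)
for step in range(10):
    detected = sense(robot)
    belief = update_belief(belief, robot, detected)
    best = max(belief, key=belief.get)
    print(f"step {step}: robot={robot} detected={detected} "
          f"best guess={best} p={belief[best]:.2f}")
    # One-cell greedy move toward the current best guess -- a stand-in for
    # the lookahead planning a real POMDP solver would perform.
    robot = (robot[0] + (best[0] > robot[0]) - (best[0] < robot[0]),
             robot[1] + (best[1] > robot[1]) - (best[1] < robot[1]))
```

The greedy motion rule at the end is only a placeholder: the value of the POMDP formulation argued for in the abstract comes from replacing it with planning that trades off information gathering against declaring the object found.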

Abstract (translated)

Future collaborative robots must be capable of finding objects. As such a fundamental skill, we expect object search to eventually become a built-in capability of any robot, similar to object detection, SLAM, and motion planning. However, existing approaches either make unrealistic compromises (e.g., reducing the problem from 3D to 2D), adopt ad-hoc, greedy search strategies, or attempt to learn end-to-end policies in simulation that do not yet generalize across real robots and environments. This thesis argues that by using Partially Observable Markov Decision Processes (POMDPs) to model object search, while exploiting structure in the human world (e.g., octrees, correlations) and in human-robot interaction (e.g., spatial language), a practical and effective object search system can be achieved. In support of this argument, I evaluate methods and systems in simulators such as PyGame, AirSim, and AI2-THOR, and design and implement a robot-independent, environment-agnostic system for object search, deploying it on the Boston Dynamics Spot robot, the Kinova MOVO robot, and the Universal Robots UR5e robotic arm to perform object search in different environments. The system enables a Spot robot to find a toy cat hidden underneath a couch in a kitchen area in under one minute. This thesis also broadly surveys the object search literature and proposes taxonomies of object search problem settings, methods, and systems.

URL

https://arxiv.org/abs/2301.10121

PDF

https://arxiv.org/pdf/2301.10121.pdf

