Active Modular Environment for Robot Navigation

2021-02-25 09:23:17
Shota Kameyama, Keisuke Okumura, Yasumasa Tamura, Xavier Défago

Abstract

This paper presents a novel robot-environment interaction for navigation tasks in which robots have neither a representation of their working space nor a planning function; instead, an active environment takes charge of these aspects. This is realized by spatially deploying computing units, called cells, and making each cell manage traffic in its respective physical region. Unlike stigmergic approaches, cells interact with each other to manage environmental information and to construct instructions on how robots move. As a proof of concept, we present an architecture called AFADA and its prototype, consisting of modular cells and robots moving on the cells. The instructions from cells are based on a distributed routing algorithm and a reservation protocol. We demonstrate that AFADA achieves efficient robot movement for single-robot navigation in a dynamic environment whose topology changes according to a stochastic model, compared to self-navigation by the robot itself. This is followed by several demos, including multi-robot navigation, highlighting the power of offloading both representation and planning from robots to the environment. We expect the concept of AFADA to contribute to infrastructure for multiple robots because it can engage in online and lifelong planning and execution.
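The abstract names a distributed routing algorithm and a reservation protocol run by the cells but does not spell them out. The sketch below is a minimal illustration of the idea under stated assumptions: distance-vector-style hop-count routing exchanged between neighbouring cells, and a simple grant/release reservation handshake. All identifiers (Cell, advertise, request_next, route_all, "robot-1") are hypothetical and not taken from the paper.

# Minimal conceptual sketch (not the paper's implementation) of the AFADA idea:
# computing "cells" tile the workspace, exchange hop-count estimates with their
# neighbours (distance-vector style), and grant/release reservations so a robot
# can be guided cell by cell without carrying its own map or planner.
# All identifiers (Cell, advertise, request_next, route_all) are hypothetical.

from __future__ import annotations


class Cell:
    def __init__(self, cell_id: str) -> None:
        self.cell_id = cell_id
        self.neighbours: list[Cell] = []
        self.dist_to_goal: float = float("inf")  # local routing estimate (hops)
        self.reserved_by: str | None = None      # reservation-protocol state

    def connect(self, other: Cell) -> None:
        # Physical adjacency; a dynamic topology would add/remove edges here.
        self.neighbours.append(other)
        other.neighbours.append(self)

    def advertise(self) -> bool:
        # One round of neighbour-to-neighbour exchange: adopt the best
        # neighbouring estimate plus one hop. Returns True if anything changed.
        best = min((n.dist_to_goal for n in self.neighbours), default=float("inf"))
        if best + 1 < self.dist_to_goal:
            self.dist_to_goal = best + 1
            return True
        return False

    def request_next(self, robot_id: str) -> Cell | None:
        # Reservation protocol: hand the robot the free neighbouring cell that
        # is closest to the goal, or None if every candidate is reserved.
        for n in sorted(self.neighbours, key=lambda c: c.dist_to_goal):
            if n.reserved_by is None and n.dist_to_goal < self.dist_to_goal:
                n.reserved_by = robot_id
                self.reserved_by = None  # release the cell being vacated
                return n
        return None


def route_all(cells: list[Cell], goal: Cell) -> None:
    # Repeat local exchanges until the hop-count estimates stabilise.
    goal.dist_to_goal = 0.0
    changed = True
    while changed:
        # List (not generator) so every cell runs an exchange each round.
        changed = any([c.advertise() for c in cells])


if __name__ == "__main__":
    # A 1x4 corridor of cells a-b-c-d; the robot starts on a, the goal is d.
    a, b, c, d = (Cell(x) for x in "abcd")
    a.connect(b)
    b.connect(c)
    c.connect(d)
    route_all([a, b, c, d], goal=d)

    here = a
    here.reserved_by = "robot-1"
    path = [here.cell_id]
    while here is not d:
        nxt = here.request_next("robot-1")
        if nxt is None:
            break  # blocked; a real cell would retry or trigger re-routing
        here = nxt
        path.append(here.cell_id)
    print(" -> ".join(path))  # a -> b -> c -> d

In this toy run the cells, not the robot, hold both the map (adjacency plus hop counts) and the plan (the sequence of granted reservations), which mirrors the offloading the abstract describes; how AFADA actually handles topology changes and multi-robot conflicts is detailed in the paper itself.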

URL

https://arxiv.org/abs/2102.12748

PDF

https://arxiv.org/pdf/2102.12748.pdf

