
The Cambridge RoboMaster: An Agile Multi-Robot Research Platform

2024-05-03 15:54:20
Jan Blumenkamp, Ajay Shankar, Matteo Bettini, Joshua Bird, Amanda Prorok

Abstract

Compact robotic platforms with powerful compute and actuation capabilities are key enablers for practical, real-world deployments of multi-agent research. This article introduces a tightly integrated hardware, control, and simulation software stack on a fleet of holonomic ground robot platforms designed with this motivation. Our robots, a fleet of customised DJI RoboMaster S1 vehicles, offer a balance between small robots that do not possess sufficient compute or actuation capabilities and larger robots that are unsuitable for indoor multi-robot tests. They run a modular ROS2-based optimal estimation and control stack for full onboard autonomy, contain ad-hoc peer-to-peer communication infrastructure, and can zero-shot run multi-agent reinforcement learning (MARL) policies trained in our vectorized multi-agent simulation framework. We present an in-depth review of other platforms currently available, showcase new experimental validation of our system's capabilities, and introduce case studies that highlight the versatility and reliability of our system as a testbed for a wide range of research demonstrations. Our system as well as supplementary material is available online: this https URL
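The abstract describes deploying simulation-trained MARL policies zero-shot on top of a ROS2-based onboard control stack. As a rough illustration only (not code from the paper), the sketch below shows how a pre-trained policy could be wrapped in a ROS 2 node that publishes holonomic velocity commands; the policy file name, topic names, observation layout, and control rate are all assumptions made for this example.

```python
# Minimal sketch (assumed, not from the paper): a ROS 2 node that runs a
# TorchScript policy and publishes holonomic velocity commands.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
import torch


class PolicyController(Node):
    def __init__(self):
        super().__init__('policy_controller')
        # Hypothetical policy file exported after simulation training.
        self.policy = torch.jit.load('policy.pt')
        self.policy.eval()
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.odom_sub = self.create_subscription(Odometry, 'odom', self.odom_cb, 10)
        self.latest_obs = None
        # Assumed 20 Hz control loop.
        self.timer = self.create_timer(0.05, self.control_step)

    def odom_cb(self, msg):
        # Illustrative observation: planar position and linear velocity only.
        self.latest_obs = torch.tensor([
            msg.pose.pose.position.x,
            msg.pose.pose.position.y,
            msg.twist.twist.linear.x,
            msg.twist.twist.linear.y,
        ], dtype=torch.float32)

    def control_step(self):
        if self.latest_obs is None:
            return
        with torch.no_grad():
            action = self.policy(self.latest_obs.unsqueeze(0)).squeeze(0)
        cmd = Twist()
        # Holonomic base: independent x/y linear velocities plus yaw rate.
        cmd.linear.x = float(action[0])
        cmd.linear.y = float(action[1])
        cmd.angular.z = float(action[2])
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    node = PolicyController()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```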

Abstract (translated)

Compact robotic platforms with powerful compute and actuation capabilities are a key enabler for practical multi-agent research. This article introduces a design built on holonomic ground robot platforms with strong compute and actuation, aimed at real-world multi-agent deployments. Our robots, a customised fleet of DJI RoboMaster S1 vehicles, strike a balance between small robots that lack sufficient compute or actuation and larger robots that are unsuitable for indoor multi-robot testing. They run a modular ROS2-based optimal estimation and control stack for full onboard autonomy, include ad-hoc peer-to-peer communication infrastructure, and can zero-shot execute multi-agent reinforcement learning (MARL) policies trained in our vectorized multi-agent simulation framework. We provide an in-depth review of other available platforms, present experimental validation of our system's capabilities, and introduce case studies highlighting the versatility and reliability of our system as a testbed for a wide range of research demonstrations. Our system and supplementary material are available online: this https URL

URL

https://arxiv.org/abs/2405.02198

PDF

https://arxiv.org/pdf/2405.02198.pdf

