Paper Reading AI Learner

On Terrain-Aware Locomotion for Legged Robots

2022-12-01 17:39:17
Shamel Fahmi

Abstract

(Simplified Abstract) To accomplish breakthroughs in dynamic whole-body locomotion, legged robots have to be terrain aware. Terrain-Aware Locomotion (TAL) implies that the robot can perceive the terrain with its sensors and make decisions based on that information. This thesis presents TAL strategies from both a proprioceptive and an exteroceptive perspective. The strategies are implemented at the level of locomotion planning, control, and state estimation, using optimization and learning techniques. The first part covers TAL strategies at the Whole-Body Control (WBC) level. We introduce a passive WBC (pWBC) framework that allows the robot to stabilize and walk over challenging terrain while taking into account the terrain geometry (inclination) and friction properties. The pWBC relies on rigid contact assumptions, which make it suitable only for stiff terrain. Consequently, we introduce Soft Terrain Adaptation aNd Compliance Estimation (STANCE), a soft terrain adaptation algorithm that generalizes beyond rigid terrain. The second part of the thesis focuses on vision-based TAL strategies. We present Vision-Based Terrain-Aware Locomotion (ViTAL), an online planning strategy that selects footholds based on the robot's capabilities and finds the robot pose that maximizes the chances of the robot succeeding in reaching those footholds. ViTAL relies on a set of robot skills that characterizes the capabilities of the robot and its legs. The skills include the robot's ability to assess the terrain's geometry, avoid leg collisions, and avoid reaching kinematic limits. Our strategies are based on optimization and learning methods and are validated on HyQ and HyQReal in simulation and experiments. We show that, with the help of these strategies, we can push dynamic legged robots one step closer to being fully autonomous and terrain aware.
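The abstract describes ViTAL as selecting footholds by combining a set of robot skills (terrain-geometry assessment, leg-collision avoidance, kinematic-limit avoidance). A minimal sketch of that idea, not the thesis implementation: each skill is modeled here as a hypothetical scoring function in [0, 1], candidate footholds are scored by the product of their skill scores, and the best-scoring candidate is selected. All function names and the toy skills below are illustrative assumptions.

```python
# Hypothetical sketch of skill-based foothold selection (ViTAL-style idea):
# each skill maps a candidate foothold to a feasibility score in [0, 1];
# candidates are ranked by the product of their skill scores.

def select_foothold(candidates, skills):
    """Return the candidate foothold with the highest combined skill score.

    candidates: list of (x, y) foothold positions.
    skills: list of callables mapping a candidate to a score in [0, 1].
    """
    def combined(c):
        score = 1.0
        for skill in skills:
            score *= skill(c)  # a foothold failing any one skill scores near 0
        return score
    return max(candidates, key=combined)

# Toy skills for illustration only.
NOMINAL = (1.0, 0.0)  # assumed nominal foothold under the hip

def kinematic_margin(c):
    # Higher score the closer the foothold is to the nominal point.
    d = ((c[0] - NOMINAL[0]) ** 2 + (c[1] - NOMINAL[1]) ** 2) ** 0.5
    return max(0.0, 1.0 - d)

def terrain_safe(c):
    # Zero score inside a small "hole" in the terrain around (0.5, 0.0).
    return 0.0 if abs(c[0] - 0.5) < 0.1 and abs(c[1]) < 0.1 else 1.0

candidates = [(0.5, 0.0), (0.9, 0.1), (1.4, 0.0)]
best = select_foothold(candidates, [kinematic_margin, terrain_safe])
```

The multiplicative combination makes any single infeasible skill (score 0) veto a candidate, which is one simple way to merge independent feasibility criteria; the thesis learns these skill evaluations rather than hand-coding them.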

URL

https://arxiv.org/abs/2212.00683

PDF

https://arxiv.org/pdf/2212.00683.pdf
