Paper Reading AI Learner

WHERE-Bot: a Wheel-less Helical-ring Everting Robot Capable of Omnidirectional Locomotion

2025-03-10 12:30:23
Siyuan Feng, Dengfeng Yan, Jin Liu, Haotong Han, Alexandra Kühl, Shuguang Li

Abstract

Compared to conventional wheeled transportation systems designed for flat surfaces, soft robots exhibit exceptional adaptability to various terrains, enabling stable movement in complex environments. However, due to the risk of collision with obstacles and barriers, most soft robots rely on sensors for navigation in unstructured environments with uncertain boundaries. In this work, we present the WHERE-Bot, a wheel-less everting soft robot capable of omnidirectional locomotion. Our WHERE-Bot can navigate through unstructured environments by leveraging its structural and motion advantages rather than relying on sensors for boundary detection. By configuring a spring toy "Slinky" into a loop shape, the WHERE-Bot performs multiple rotational motions: spiral-rotating along the hub circumference, self-rotating around the hub's center, and orbiting around a certain point. The robot's trajectories can be reprogrammed by actively altering its mass distribution. The WHERE-Bot shows significant potential for boundary exploration in unstructured environments.
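The abstract describes three superposed rotations: spiral rotation along the hub circumference, self-rotation about the hub's center, and orbiting about a fixed point. A minimal sketch of how such a composite trajectory could be parameterized is shown below; all radii and angular rates (`r_spiral`, `r_hub`, `r_orbit`, `w_spiral`, `w_self`, `w_orbit`) are hypothetical illustration values, not parameters from the paper.

```python
import math

def where_bot_point(t, r_spiral=0.05, r_hub=0.2, r_orbit=0.5,
                    w_spiral=6.0, w_self=1.5, w_orbit=0.3):
    """Illustrative superposition of the three rotations named in the
    abstract. Radii (m) and angular rates (rad/s) are made-up values
    chosen only to make the composite trajectory visible."""
    # Hub center orbits a fixed point at radius r_orbit.
    cx = r_orbit * math.cos(w_orbit * t)
    cy = r_orbit * math.sin(w_orbit * t)
    # A reference point on the ring self-rotates about the hub center.
    px = cx + r_hub * math.cos(w_self * t)
    py = cy + r_hub * math.sin(w_self * t)
    # Small spiral (everting) component along the hub circumference.
    px += r_spiral * math.cos(w_spiral * t)
    py += r_spiral * math.sin(w_spiral * t)
    return px, py

# Sample the trajectory over a few seconds.
trajectory = [where_bot_point(0.01 * k) for k in range(1000)]
```

Changing the relative magnitudes of the three rates loosely mirrors the paper's idea of reprogramming the trajectory, which the robot itself achieves by actively altering its mass distribution.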


URL

https://arxiv.org/abs/2503.07245

PDF

https://arxiv.org/pdf/2503.07245.pdf

