Paper Reading AI Learner

SPONGE: Open-Source Designs of Modular Articulated Soft Robots

2024-04-16 17:06:40
Tim-Lukas Habich, Jonas Haack, Mehdi Belhadj, Dustin Lehmann, Thomas Seel, Moritz Schappler

Abstract

Soft-robot designs are manifold, but only a few are publicly available. Often, these are only briefly described in their publications. This complicates reproduction and hinders the reproducibility and comparability of research results. If the designs were uniform and open source, validating researched methods on real benchmark systems would be possible. To address this, we present two variants of a soft pneumatic robot with antagonistic bellows as open source. Starting from a semi-modular design with multiple cables and tubes routed through the robot body, the transition to a fully modular robot with integrated microvalves and serial communication is highlighted. Modularity in terms of stackability, actuation, and communication is achieved, which is the crucial requirement for building soft robots with many degrees of freedom and high dexterity for real-world tasks. Both systems are compared regarding their respective advantages and disadvantages. The robots' functionality is demonstrated in experiments on airtightness, gravitational influence, position control with mean tracking errors of less than 3 deg, and long-term operation of cast and printed bellows. All software and hardware files required for reproduction are provided.
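
As a rough illustration of the antagonistic actuation principle mentioned in the abstract, the following Python sketch shows a simple PI position loop that maps a joint-angle error to a differential pressure between two opposing bellows around a common pre-pressure. This is an assumption-based example, not the controller described in the paper; the class name, gains, and pressure limits are hypothetical.

```python
# Minimal sketch (not the authors' controller): PI position control of one
# antagonistic-bellows joint. The joint angle is driven by the pressure
# difference between the two opposing bellows; the controller converts the
# angle error into a differential pressure command around a common
# pre-pressure level. All names, gains, and limits are illustrative.

from dataclasses import dataclass


@dataclass
class AntagonisticJointController:
    kp: float = 0.08      # proportional gain [bar/deg], assumed value
    ki: float = 0.02      # integral gain [bar/(deg*s)], assumed value
    p_mean: float = 0.6   # common pre-pressure of both bellows [bar]
    p_max: float = 1.2    # valve/bellows pressure limit [bar]
    _integral: float = 0.0

    def update(self, q_des: float, q_meas: float, dt: float) -> tuple[float, float]:
        """Return pressure setpoints (p_a, p_b) for the two opposing bellows."""
        error = q_des - q_meas                           # tracking error [deg]
        self._integral += error * dt
        dp = self.kp * error + self.ki * self._integral  # differential pressure [bar]

        # Antagonistic actuation: raise one bellows pressure, lower the other,
        # clamped to the physically admissible range.
        p_a = min(max(self.p_mean + 0.5 * dp, 0.0), self.p_max)
        p_b = min(max(self.p_mean - 0.5 * dp, 0.0), self.p_max)
        return p_a, p_b


# Example control step at 100 Hz: 10 deg setpoint, 7.5 deg measured angle.
ctrl = AntagonisticJointController()
p_a, p_b = ctrl.update(q_des=10.0, q_meas=7.5, dt=0.01)
print(f"bellows A: {p_a:.2f} bar, bellows B: {p_b:.2f} bar")
```

Splitting the differential command symmetrically around the mean pressure keeps the joint pre-pressurized on both sides, which is the usual reason for choosing an antagonistic bellows pair over a single actuator.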

URL

https://arxiv.org/abs/2404.10734

PDF

https://arxiv.org/pdf/2404.10734.pdf

