ScrewMimic: Bimanual Imitation from Human Videos with Screw Space Projection

2024-05-06 17:43:34
Arpit Bahety, Priyanka Mandikal, Ben Abbatematteo, Roberto Martín-Martín

Abstract

Bimanual manipulation is a longstanding challenge in robotics due to the large number of degrees of freedom and the strict spatial and temporal synchronization required to generate meaningful behavior. Humans learn bimanual manipulation skills by watching other humans and by refining their abilities through play. In this work, we aim to enable robots to learn bimanual manipulation behaviors from human video demonstrations and fine-tune them through interaction. Inspired by seminal work in psychology and biomechanics, we propose modeling the interaction between two hands as a serial kinematic linkage -- as a screw motion, in particular, that we use to define a new action space for bimanual manipulation: screw actions. We introduce ScrewMimic, a framework that leverages this novel action representation to facilitate learning from human demonstration and self-supervised policy fine-tuning. Our experiments demonstrate that ScrewMimic is able to learn several complex bimanual behaviors from a single human video demonstration, and that it outperforms baselines that interpret demonstrations and fine-tune directly in the original space of motion of both arms. For more information and video results, see the project website.
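To make the idea of a screw action concrete: in screw theory, any rigid displacement of one hand relative to the other can be expressed as a rotation about, and a translation along, a single axis, parameterized by the axis direction, a point on the axis, a pitch, and a magnitude. The sketch below is a minimal illustration of that parameterization, not the paper's implementation; it maps a screw action to a relative SE(3) transform via the standard exponential map, and the helper names (`screw_to_transform`, `skew`) and the cap-twisting example values are hypothetical.

```python
import numpy as np

def skew(w):
    """Return the 3x3 skew-symmetric matrix [w] of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw_to_transform(omega, q, pitch, theta):
    """Map a screw action -- unit axis direction `omega`, point `q` on the
    axis, pitch (translation per radian), and magnitude `theta` -- to a
    4x4 homogeneous transform via the SE(3) exponential map."""
    omega = omega / np.linalg.norm(omega)
    v = -np.cross(omega, q) + pitch * omega      # linear part of the twist
    W = skew(omega)
    # Rodrigues' formula for the rotation about the screw axis.
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    # Closed-form integral giving the translation component.
    G = (np.eye(3) * theta + (1.0 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * (W @ W))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

# Hypothetical example: a cap-twisting motion, i.e. a pure rotation
# (pitch = 0) of the moving hand about a vertical axis through the cap,
# expressed in the frame of the holding hand.
T_rel = screw_to_transform(omega=np.array([0.0, 0.0, 1.0]),
                           q=np.array([0.1, 0.0, 0.2]),
                           pitch=0.0,
                           theta=np.pi / 2)
print(np.round(T_rel, 3))
```

The point of this parameterization is that a low-dimensional screw action constrains the 6-DoF relative motion between the two hands, which is what makes interpreting demonstrations and fine-tuning in this space easier than operating in the original motion space of both arms.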

URL

https://arxiv.org/abs/2405.03666

PDF

https://arxiv.org/pdf/2405.03666.pdf

