Paper Reading AI Learner

Learning Multi-Step Robotic Tasks from Observation

2018-06-29 01:11:56
Wonjoon Goo, Scott Niekum

Abstract

Due to burdensome data requirements, learning from demonstration often falls short of its promise to allow users to quickly and naturally program robots. Demonstrations are inherently ambiguous and incomplete, making correct generalization to unseen situations difficult without a large number of demonstrations in varying conditions. By contrast, humans are often able to learn complex tasks from a single demonstration (typically observations without action labels) by leveraging context learned over a lifetime. Inspired by this capability, we aim to enable robots to perform one-shot learning of multi-step tasks from observation by leveraging auxiliary video data as context. Our primary contribution is a novel action localization algorithm that identifies clips of activities in auxiliary videos that match the activities in a user-segmented demonstration, providing additional examples of each. While this auxiliary video data could be used in multiple ways for learning, we focus on an inverse reinforcement learning setting. We empirically show that across several tasks, robots can learn multi-step tasks more effectively from videos with localized actions than from unsegmented videos.
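
The abstract's core step, localizing clips in auxiliary videos that match each activity of a user-segmented demonstration, can be illustrated with a minimal sketch. The version below assumes precomputed clip-level feature embeddings and uses cosine similarity over sliding-window clips as the matching criterion; the names, threshold, and similarity measure are illustrative assumptions rather than the paper's actual algorithm.

import numpy as np

def localize_matching_clips(demo_segment_feats, aux_clip_windows, threshold=0.8):
    """For each activity segment of the user-segmented demonstration,
    return the auxiliary-video windows whose features are most similar,
    yielding additional examples of that activity.

    demo_segment_feats: list of 1-D feature vectors, one per demo activity.
    aux_clip_windows:   list of (start_frame, end_frame, feature_vector)
                        tuples from a sliding window over an auxiliary video.
    """
    matches = {}
    for i, d in enumerate(demo_segment_feats):
        scored = []
        for start, end, c in aux_clip_windows:
            # Cosine similarity as an (assumed) matching criterion.
            sim = float(np.dot(d, c) /
                        (np.linalg.norm(d) * np.linalg.norm(c) + 1e-8))
            if sim >= threshold:
                scored.append((sim, start, end))
        # Best-scoring clips first; these serve as extra examples of activity i.
        matches[i] = [(s, e) for _, s, e in sorted(scored, reverse=True)]
    return matches

The matched clips can then be fed, alongside the original demonstration segments, into a downstream learner such as the inverse reinforcement learning setup the paper focuses on.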

URL

https://arxiv.org/abs/1806.11244

PDF

https://arxiv.org/pdf/1806.11244.pdf

