LEADOR: A Method for End-to-End Participatory Design of Autonomous Social Robots

2021-05-05
Katie Winkle, Emmanuel Senft, Séverin Lemaignan

Abstract

Participatory Design (PD) in Human-Robot Interaction (HRI) typically remains limited to the early phases of development, with subsequent robot behaviour hard-coded by engineers or driven by Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. We present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for the domain-expert co-design, automation, and evaluation of social robots. LEADOR starts with typical PD to co-design the interaction specification and the robot's state and action spaces. It then replaces traditional offline programming or WoZ with an in-situ, online teaching phase in which the domain expert can live-program or teach the robot how to behave while embedded in the interaction context. We believe this live teaching is best achieved by adding a learning component to a WoZ setup, capturing experts' implicit knowledge as they intuitively respond to the dynamics of the situation. The robot progressively learns an appropriate, expert-approved policy, ultimately reaching full autonomy, even in sensitive and/or ill-defined environments. LEADOR is, however, agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of domain expert(s) in robot design reflects established responsible-innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. Combined with the focus on in-situ development, this expert inclusion also means LEADOR supports a mutual-shaping approach to social robotics. We draw on two previously published, foundational works from which this (generalisable) methodology was derived to demonstrate the feasibility and value of the approach, provide concrete examples of its application, and identify limitations and opportunities when applying the framework in new environments.
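To make the teaching phase concrete, below is a minimal sketch of what "adding a learning component to a WoZ setup" could look like: the learner proposes an action for the current interaction state, the domain expert approves or overrides it, and every executed (state, action) pair becomes a training example, so autonomy grows as the policy earns the expert's trust. This is an illustration under stated assumptions, not the paper's implementation: the paper is agnostic to the learning approach, and all names here (`TeachablePolicy`, `teaching_session`, the example states and actions, the confidence threshold) are hypothetical. A simple tabular majority-vote learner stands in for whatever learner a real deployment would use.

```python
"""Illustrative sketch of an online teaching loop in a WoZ-plus-learning setup.

The expert vets every low-confidence suggestion; executed actions are fed
back as supervision, so the policy converges on expert-approved behaviour.
All identifiers are hypothetical, not taken from the LEADOR paper.
"""

import random
from collections import Counter, defaultdict


class TeachablePolicy:
    """Tabular policy: majority vote over expert-executed actions per state."""

    def __init__(self, confidence_threshold=0.8):
        self.memory = defaultdict(Counter)  # state -> Counter of actions
        self.confidence_threshold = confidence_threshold

    def suggest(self, state):
        """Return (action, confidence); confidence is 0.0 for unseen states."""
        if not self.memory[state]:
            return None, 0.0
        action, count = self.memory[state].most_common(1)[0]
        return action, count / sum(self.memory[state].values())

    def learn(self, state, action):
        """Record one expert-approved (state, action) training example."""
        self.memory[state][action] += 1


def teaching_session(policy, states, expert_action, n_steps=200):
    """One in-situ teaching session: the expert supervises every decision."""
    for _ in range(n_steps):
        state = random.choice(states)
        proposed, confidence = policy.suggest(state)
        if confidence >= policy.confidence_threshold:
            executed = proposed              # expert lets a trusted suggestion pass
        else:
            executed = expert_action(state)  # expert overrides / demonstrates
        policy.learn(state, executed)


if __name__ == "__main__":
    states = ["greeting", "task_success", "task_failure", "disengaged"]
    # Stand-in for the expert's live input via a WoZ interface.
    expert = {"greeting": "say_hello", "task_success": "praise",
              "task_failure": "encourage", "disengaged": "re_engage"}.get
    policy = TeachablePolicy()
    teaching_session(policy, states, expert)
    for s in states:
        print(s, "->", policy.suggest(s))
```

After enough supervised steps, every suggestion clears the confidence threshold and the expert is no longer consulted, which is the sketch's analogue of LEADOR's transition from the teaching phase to autonomous operation.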

URL

https://arxiv.org/abs/2105.01910

PDF

https://arxiv.org/pdf/2105.01910.pdf

