
Transparency in Multi-Human Multi-Robot Interaction

2021-01-26 00:13:58
Jayam Patel, Tyagaraja Ramaswamy, Zhi Li, Carlo Pinciroli

Abstract

Transparency is a key factor in improving the performance of human-robot interaction. A transparent interface allows humans to be aware of the state of a robot and to assess the progress of the tasks at hand. When multi-robot systems are involved, transparency is an even greater challenge, due to the larger number of variables affecting the behavior of the robots as a whole. Significant effort has been devoted to studying transparency when single operators interact with multiple robots. However, studies on transparency that focus on multiple human operators interacting with multi-robot systems are limited. This paper aims to fill this gap by presenting a human-swarm interaction interface with graphical elements that can be enabled and disabled. Through this interface, we study which graphical elements contribute to transparency by comparing four "transparency modes": (i) no transparency (no operator receives information from the robots), (ii) central transparency (the operators receive information only relevant to their personal task), (iii) peripheral transparency (the operators share information on each other's tasks), and (iv) mixed transparency (both central and peripheral). We report the results in terms of awareness, trust, and workload of a user study involving 18 participants engaged in a complex multi-robot task.
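
The four transparency modes amount to a per-operator choice of which classes of graphical elements are enabled: "central" elements about the operator's own task and "peripheral" elements about the other operators' tasks. The sketch below is a minimal illustration of that mapping, not the authors' implementation; all names (TransparencyMode, InterfaceConfig, config_for) are assumptions introduced here for clarity.

from dataclasses import dataclass
from enum import Enum


class TransparencyMode(Enum):
    NONE = "none"              # no operator receives information from the robots
    CENTRAL = "central"        # operators see information only about their own task
    PERIPHERAL = "peripheral"  # operators see information about each other's tasks
    MIXED = "mixed"            # both central and peripheral information is shown


@dataclass
class InterfaceConfig:
    # Which classes of graphical elements are enabled for an operator
    # (hypothetical grouping, based on the mode definitions in the abstract).
    show_own_task_elements: bool     # "central" elements
    show_other_task_elements: bool   # "peripheral" elements


def config_for(mode: TransparencyMode) -> InterfaceConfig:
    # Map a transparency mode to the set of enabled interface elements.
    return InterfaceConfig(
        show_own_task_elements=mode in (TransparencyMode.CENTRAL, TransparencyMode.MIXED),
        show_other_task_elements=mode in (TransparencyMode.PERIPHERAL, TransparencyMode.MIXED),
    )


if __name__ == "__main__":
    for mode in TransparencyMode:
        print(f"{mode.value:>10}: {config_for(mode)}")

Running the sketch prints one configuration per mode, making explicit that "mixed" enables both element classes while "no transparency" enables neither.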

URL

https://arxiv.org/abs/2101.10495

PDF

https://arxiv.org/pdf/2101.10495.pdf

