Paper Reading AI Learner

TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot Teams

2023-01-26 04:25:53
Yaohui Guo, X. Jessie Yang, Cong Shi

Abstract

Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams, where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, any human agent has two types of experiences with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search-and-detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
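The abstract distinguishes two channels by which a human's trust in a robot can change: direct experience (observing the robot's own performance) and indirect experience (information propagated from a teammate). A minimal illustrative sketch of this idea is below; the update rules, function names, and weights (`alpha`, `beta`) are assumptions for illustration only, not the TIP model's actual equations, which are given in the paper.

```python
# Hypothetical sketch: trust in [0, 1] updated from two kinds of experience.
# These linear "move toward the signal" rules are illustrative assumptions,
# not the TIP model's published formulation.

def update_trust_direct(trust, performance, alpha=0.3):
    """Direct experience: move trust toward the observed robot performance
    (e.g., 1.0 for a success, 0.0 for a failure)."""
    return trust + alpha * (performance - trust)

def update_trust_indirect(trust, teammate_trust, beta=0.1):
    """Indirect experience: propagate a teammate's trust in the same robot,
    weighted less heavily than first-hand observation."""
    return trust + beta * (teammate_trust - trust)

# Example: human 1's trust in robot A across one direct and one indirect event.
t = 0.5
t = update_trust_direct(t, performance=1.0)    # human 1 sees robot A succeed
t = update_trust_indirect(t, teammate_trust=0.8)  # human 2 reports high trust
```

The asymmetry between `alpha` and `beta` encodes the intuition that first-hand observation should shift trust more than second-hand reports; fitting such weights per participant is the kind of estimation a computational trust model enables.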

Abstract (translated)

信任已被确认为有效人机协作的关键因素。现有的信任建模文献主要关注二元人机团队,即一个人类代理与一个机器人交互的场景。针对由多个人类代理和多个机器人代理组成的团队的信任建模研究几乎空白。为了填补这一研究空缺,我们提出了信任推理与传播(TIP)模型,用于多人类多机器人团队中的信任建模。我们指出,在多人类多机器人团队中,任何人类代理与任何机器人之间存在两类经历:直接经历和间接经历。TIP模型提出了一个新的数学框架,明确地将这两类经历纳入建模。为了评估该模型,我们开展了一项人类受试者实验,共有15对参与者(N=30),每对参与者使用两架无人机执行搜索与检测任务。结果显示,我们的TIP模型成功地捕捉到了背后的信任动态,并显著优于基准模型。据我们所知,TIP模型是首个用于多人类多机器人团队计算信任建模的数学框架。

URL

https://arxiv.org/abs/2301.10928

PDF

https://arxiv.org/pdf/2301.10928.pdf

