Abstract
Large-scale foundation models are increasingly being integrated into safety-critical applications, including human-autonomy teaming (HAT) in the transportation, medical, and defence domains. The inherent 'black-box' nature of these deep neural networks makes fostering mutual understanding and trust between humans and autonomous systems all the more important. To address the transparency challenges in HAT, this paper presents an in-depth study of the underexplored domain of the Explainable Interface (EI) in HAT systems from a human-centric perspective, thereby enriching the existing body of research in Explainable Artificial Intelligence (XAI). We examine the design, development, and evaluation of EIs in XAI-enhanced HAT systems. First, we clarify the distinctions among three concepts: EI, explanations, and model explainability, providing researchers and practitioners with a structured understanding. Second, we contribute a novel EI framework that addresses the unique challenges of HAT. Last, we summarize an evaluation framework for EI that offers a holistic perspective, encompassing model performance, human-centered factors, and group task objectives. Drawing on extensive surveys across XAI, HAT, psychology, and Human-Computer Interaction (HCI), this review offers multiple novel insights into incorporating XAI into HAT systems and outlines directions for future work.
URL
https://arxiv.org/abs/2405.02583