Abstract
Placing a human in the loop may abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks arising from human error and uncertainty within such human-AI interactions is an important and understudied issue. In this work, we study human uncertainty in the context of concept-based models, a family of AI systems that enable human feedback via concept interventions where an expert intervenes on human-interpretable concepts relevant to the task. Prior work in this space often assumes that humans are oracles who are always certain and correct. Yet, real-world decision-making by humans is prone to occasional mistakes and uncertainty. We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans. We show that training with uncertain concept labels may help mitigate weaknesses of concept-based systems when handling uncertain interventions. These results allow us to identify several open challenges, which we argue can be tackled through future multidisciplinary research on building interactive uncertainty-aware systems. To facilitate further research, we release a new elicitation platform, UElic, to collect uncertain feedback from humans in collaborative prediction tasks.
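To make the idea of uncertain concept interventions concrete, here is a minimal sketch of a concept-bottleneck-style model in which a human can overwrite predicted concept probabilities with soft labels instead of hard 0/1 oracle values. All names (`ConceptBottleneckModel`, `predict_concepts`, the random placeholder weights) are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneckModel:
    """Toy concept-bottleneck model: inputs x -> concepts c -> label y.

    Weights are random placeholders; a real model would be trained
    end-to-end or with a concept-supervision loss.
    """

    def __init__(self, n_features, n_concepts, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_c = rng.normal(size=(n_features, n_concepts))
        self.W_y = rng.normal(size=(n_concepts, n_classes))

    def predict_concepts(self, x):
        # Predicted concept probabilities in [0, 1].
        return sigmoid(x @ self.W_c)

    def predict(self, x, interventions=None):
        """interventions: dict of concept index -> value in [0, 1].

        A fully certain (oracle) intervention supplies 0.0 or 1.0;
        an uncertain human can instead supply an intermediate
        probability, e.g. 0.7, expressing partial confidence.
        """
        c = self.predict_concepts(x)
        if interventions:
            for idx, soft_value in interventions.items():
                c[..., idx] = soft_value  # overwrite model's estimate
        logits = c @ self.W_y
        return np.argmax(logits, axis=-1)
```

Under this sketch, "training with uncertain concept labels" would mean fitting `W_c` against soft human annotations (as in CUB-S) rather than binarized ones, so the bottleneck's scale matches what uncertain interveners actually provide at test time.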
URL
https://arxiv.org/abs/2303.12872