Abstract
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem by making the decision-making process of deep learning models more interpretable and transparent. Traditional xAI methods concentrate on visualizing the input features that influence model predictions, providing insights primarily suited to experts. In this work, we present an interaction-based xAI method that enhances user comprehension of image classification models through direct manipulation. To this end, we developed a web-based prototype that allows users to modify images by painting and erasing, and to observe the resulting changes in classification results. Our approach enables users to discern which features are critical to the model's decision-making process, aligning their mental models with the model's logic. Experiments conducted with five images demonstrate the potential of the method to reveal feature importance through user interaction. Our work contributes a novel perspective to xAI by centering on end-user engagement and understanding, paving the way for more intuitive and accessible explainability in AI systems.
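The core interaction described above (erase a region, re-run the classifier, compare scores) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it substitutes a toy linear scorer for a real deep classification model, and the `erase` helper and region coordinates are hypothetical stand-ins for the prototype's painting/erasing tools.

```python
import numpy as np

# Toy stand-in for an image classifier: a linear scorer whose weights
# concentrate on the top-left quadrant (hypothetical; the prototype
# queries a real deep image-classification model instead).
H, W = 8, 8
weights = np.zeros((H, W))
weights[:4, :4] = 1.0  # region that is "critical" to the decision


def classify(img: np.ndarray) -> float:
    """Return a scalar confidence score for the positive class."""
    return float((img * weights).sum())


def erase(img: np.ndarray, r0: int, r1: int, c0: int, c1: int) -> np.ndarray:
    """Simulate the erase tool: zero out a rectangular patch."""
    out = img.copy()
    out[r0:r1, c0:c1] = 0.0
    return out


image = np.ones((H, W))
baseline = classify(image)

# Erasing the critical region changes the score sharply...
drop_critical = baseline - classify(erase(image, 0, 4, 0, 4))
# ...while erasing an unimportant region leaves it unchanged,
# letting the user infer which features drive the prediction.
drop_irrelevant = baseline - classify(erase(image, 4, 8, 4, 8))

print(drop_critical, drop_irrelevant)
```

The user-facing signal is the score difference between the original and the edited image: a large drop marks the erased region as important, a near-zero drop marks it as irrelevant.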
URL
https://arxiv.org/abs/2404.09828