Abstract
Explaining the decisions of deep neural networks is a hot research topic with applications in medical imaging, video surveillance, and self-driving cars. Many methods have been proposed in the literature to explain these decisions by identifying the relevance of different pixels. In this paper, we propose a method that generates contrastive explanations for such data, in which we not only highlight aspects that are by themselves sufficient to justify the classification by the deep model, but also new aspects which, if added, would change the classification. One of our key contributions is how we define "addition" for such rich data in a formal yet human-interpretable way that leads to meaningful results. This was one of the open questions laid out in Dhurandhar et al. (2018) [5], which proposed a general framework for creating (local) contrastive explanations for deep models. We showcase the efficacy of our approach on CelebA and Fashion-MNIST, creating intuitive explanations that are also quantitatively superior to other state-of-the-art interpretability methods.
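The second kind of explanation above (a "pertinent negative" in the terminology of Dhurandhar et al. (2018) [5]) can be made concrete as an optimization over an additive perturbation. Below is a minimal sketch of such a search, assuming a differentiable PyTorch classifier; the function name `pertinent_negative`, the hyperparameters, and the non-negativity constraint standing in for the notion of "addition" are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a contrastive-explanation ("pertinent negative")
# search in the spirit of Dhurandhar et al. (2018). All names and
# hyperparameters here are assumptions for illustration only.
import torch

def pertinent_negative(model, x0, orig_class, steps=300, lr=0.01,
                       c=1.0, beta=0.1, kappa=0.0):
    """Search for an additive perturbation delta such that x0 + delta
    is classified differently from x0, while keeping delta sparse."""
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x0 + delta)  # shape (1, num_classes)
        # Hinge term: push the original class below the best other class.
        others = logits.clone()
        others[0, orig_class] = float("-inf")
        attack = torch.clamp(logits[0, orig_class] - others.max(),
                             min=-kappa)
        # Elastic-net regularizer keeps the perturbation small and sparse.
        loss = c * attack + beta * delta.abs().sum() + (delta ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Restrict to additions only: a simple stand-in for the formal,
        # human-interpretable definition of "addition" in the paper.
        with torch.no_grad():
            delta.clamp_(min=0.0)
    return delta.detach()
```

The key design choice this sketch illustrates is constraining the perturbation to be an addition rather than an arbitrary change, which is what makes the resulting explanation contrastive and human-readable instead of an adversarial artifact.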
URL
https://arxiv.org/abs/1905.12698