Abstract
Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a semantic feature representation rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations are more human-interpretable than saliency maps alone, and can also improve performance on the classic task of attribute recognition. The ability of our approach to generalize is demonstrated on two datasets from very different domains, Polyvore Outfits and Animals with Attributes 2.
Abstract (translated)
Explaining a deep learning model can help users understand its behavior and allow researchers to identify its shortcomings. Recent work has focused mainly on explaining models for tasks such as image classification or visual question answering. This paper introduces an explanation method for image similarity models, in which the model's output is a semantic feature representation rather than a classification. In this task, the explanation depends on both input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with the attribute that best explains the match. We find that our explanations are more human-interpretable than saliency maps alone and can also improve performance on the classic task of attribute recognition. The ability of our approach to generalize is demonstrated on two datasets from very different domains, Polyvore Outfits and Animals with Attributes 2.
URL
https://arxiv.org/abs/1905.10797
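The abstract notes that standard saliency methods do not apply because the explanation must depend on both input images, but it does not spell out how such a pairwise saliency map is computed. Below is a minimal sketch of one plausible occlusion-style probe: occlude regions of one image and measure how much its embedding similarity to the other image drops. The `embed` callable, the `pairwise_saliency` helper, and the gray-patch occlusion strategy are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def pairwise_saliency(embed, img_a, img_b, patch=16, stride=8):
    """Occlusion-style saliency for a similarity model (illustrative sketch).

    Slides a gray patch over img_a and records how much the cosine
    similarity between the two embeddings drops when that region is
    hidden. `embed` maps an HxWx3 float array to a 1-D feature vector.
    """
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    base = cos(embed(img_a), embed(img_b))  # unoccluded similarity
    h, w = img_a.shape[:2]
    sal = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = img_a.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # neutral gray patch
            drop = base - cos(embed(occluded), embed(img_b))
            sal[y:y + patch, x:x + patch] += drop
            cnt[y:y + patch, x:x + patch] += 1
    # Average overlapping windows; large values mark regions whose
    # removal most hurts the match.
    return sal / np.maximum(cnt, 1)

# Toy usage with a stand-in embedding (per-channel mean color):
rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
toy_embed = lambda im: im.mean(axis=(0, 1))
heatmap = pairwise_saliency(toy_embed, img_a, img_b)
```

The paper's full explanation additionally pairs such a map with the attribute that best explains the match; one hypothetical way to realize that pairing would be to score each attribute's activation map against the saliency map and report the highest-scoring attribute.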