Abstract
AI systems' ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems, such as visual question answering (VQA), yet most of them remain opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations, which faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach over competing methods under both automatic and human evaluation metrics.
URL
https://arxiv.org/abs/1809.02805