Abstract
This paper introduces the Confidence Optimization (CO) score to directly measure the contribution of heatmaps/saliency maps to the classification performance of a model. Common heatmap generation methods used in the eXplainable Artificial Intelligence (XAI) community are tested through a process we call Augmentative eXplanation (AX). We find a surprising gap in the distribution of CO scores across these heatmap methods. This gap can potentially serve as a novel indicator of the correctness of deep neural network (DNN) predictions. We further introduce the Generative AX (GAX) method to generate saliency maps capable of attaining high CO scores. Using GAX, we also qualitatively demonstrate the unintuitiveness of DNN architectures.
URL
https://arxiv.org/abs/2201.00009