Abstract
To address the challenge of providing fast and plausible explanations in Explainable AI (XAI) for object detection models, we introduce the Gaussian Class Activation Mapping Explainer (G-CAME). Our method efficiently generates concise saliency maps by using activation maps from selected layers and applying a Gaussian kernel to emphasize the image regions critical to the predicted object. Compared with other region-based approaches, G-CAME significantly reduces explanation time to 0.5 seconds without compromising quality. Our evaluation of G-CAME with Faster R-CNN and YOLOX on the MS-COCO 2017 dataset demonstrates that it offers highly plausible and faithful explanations, especially in reducing bias in tiny-object detection.
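The abstract describes combining CAM-style saliency (gradient-weighted activation maps) with a Gaussian kernel centered on the predicted object. Below is a minimal sketch of that idea; the function names, shapes, and the specific gradient-averaging weighting are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def gaussian_mask(h, w, center, sigma):
    """2D Gaussian (peak 1.0) centered at `center` = (row, col)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

def gcame_style_saliency(activations, grads, center, sigma):
    """Illustrative CAM-style saliency: weight each activation channel by its
    mean gradient, sum, apply ReLU and normalize, then multiply by a Gaussian
    mask so only the region around the predicted object is emphasized."""
    weights = grads.mean(axis=(1, 2))                                   # one weight per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)  # ReLU
    cam = cam / (cam.max() + 1e-8)                                      # normalize to [0, 1]
    return cam * gaussian_mask(*cam.shape, center, sigma)

# toy example: 8 channels of 16x16 activations, object predicted near (5, 10)
rng = np.random.default_rng(0)
acts = rng.random((8, 16, 16))
grads = rng.random((8, 16, 16))
sal = gcame_style_saliency(acts, grads, center=(5, 10), sigma=3.0)
```

In this sketch the Gaussian mask plays the role the abstract attributes to the kernel: it suppresses saliency far from the detected object, which is what keeps the resulting map concise for a single detection.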
URL
https://arxiv.org/abs/2404.13417