Abstract
Deep visual models have widespread applications in high-stakes domains, and their black-box nature is therefore attracting considerable interest from the research community. We present the first survey in Explainable AI that focuses on the methods and metrics for interpreting deep visual models. Covering landmark contributions alongside the state of the art, we not only provide a taxonomic organization of existing techniques, but also collect a range of evaluation metrics and collate them as measures of different properties of model explanations. Alongside a discussion of current trends, we also outline the challenges and future avenues for this research direction.
URL
https://arxiv.org/abs/2301.13445