Abstract
This paper proposes an alternative approach to the basic taxonomy of explanations produced by explainable artificial intelligence techniques. Methods of Explainable Artificial Intelligence (XAI) were developed to answer the question of why a certain prediction or estimate was made, preferably in terms easy for a human agent to understand. XAI taxonomies proposed in the literature focus mainly on distinguishing explanations by how they involve the human agent, which makes it difficult to distinguish and compare explanations in a more mathematical way. This paper restricts its attention to cases where the data set of interest belongs to $\mathbb{R}^n$ and proposes a simple, linear algebra-based taxonomy for local explanations.
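As a generic illustration (not the paper's own taxonomy), a local explanation of a scalar model on $\mathbb{R}^n$ can itself be represented as a vector in $\mathbb{R}^n$, for instance a finite-difference gradient at the point being explained; such vectors are the kind of linear-algebraic objects a taxonomy of this sort could compare. A minimal sketch, where the model `f` and the helper name are illustrative assumptions:

```python
def local_linear_explanation(f, x0, eps=1e-6):
    """Central finite-difference gradient of f at x0.

    The returned list is a vector in R^n acting as a simple local
    explanation: component i measures the sensitivity of f to feature i.
    (Illustrative only; not the method proposed in the paper.)
    """
    grad = []
    for i in range(len(x0)):
        xp, xm = list(x0), list(x0)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

# Toy model on R^2: f(x) = 3*x_0 + 2*x_1^2
f = lambda x: 3 * x[0] + 2 * x[1] ** 2
print(local_linear_explanation(f, [1.0, 1.0]))  # ≈ [3.0, 4.0]
```

Because the explanation lives in the same vector space as the data, standard linear-algebra notions (norms, inner products, subspaces) become available for comparing explanations.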
URL
https://arxiv.org/abs/2301.13138