Abstract
Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users typically see either inaccurate global explanations or highly varying local explanations. We propose providing more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge through incrementally receiving more details. Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI, which automatically partitions explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing the base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability, and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better internalize to facilitate intuitive engagement with AI.
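A minimal sketch of the Base + Incremental linear-factor idea described above: a shared base explanation covers typical instances, and a small set of incremental factors is added only for an atypical partition. The factor names, values, and the partition rule here are illustrative assumptions, not taken from the paper.

```python
# Base explanation: factors x values = outcome, applied to typical instances.
BASE_FACTORS = {"sqft": 200, "bedrooms": 10_000}  # illustrative weights ($ per unit)

# Incremental factors: extra terms reused on top of the base explanation
# for an atypical partition (e.g., waterfront homes) -- hypothetical example.
INCREMENTAL_FACTORS = {"waterfront": 150_000}

def explain(instance: dict) -> tuple[float, dict]:
    """Return (predicted outcome, factors used) for one instance."""
    factors = dict(BASE_FACTORS)
    if instance.get("waterfront"):  # assumed rule for the atypical partition
        factors.update(INCREMENTAL_FACTORS)
    outcome = sum(w * instance.get(name, 0) for name, w in factors.items())
    return outcome, factors

# A typical house needs only the base factors; an atypical one adds the increment.
print(explain({"sqft": 1500, "bedrooms": 3}))
print(explain({"sqft": 1500, "bedrooms": 3, "waterfront": 1}))
```

Because the base factors are reused across both cases and only one extra factor appears for the atypical partition, the explanation stays short enough to remember while being more faithful than a single global linear explanation.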
URL
https://arxiv.org/abs/2404.06733