Abstract
Typically, research on Explainable Artificial Intelligence (XAI) focuses on black-box models within the context of a general policy in a known, specific domain. This paper advocates knowledge-agnostic explainability for Explainable Search, the subfield of XAI concerned with explaining the choices made by intelligent search techniques. It proposes Monte-Carlo Tree Search (MCTS) enhancements as a means of obtaining additional data and producing higher-quality explanations while remaining knowledge-free, and analyzes the most popular enhancements in terms of the specific types of explainability they introduce. So far, no other research has considered the explainability of MCTS enhancements. We present a proof-of-concept that demonstrates the advantages of utilizing such enhancements.
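To ground the idea that search statistics can serve as knowledge-free explanation data, the following is a minimal sketch of vanilla UCT-based MCTS in Python. The game (`NimState`), the helper `explain_root`, and all parameter values are illustrative assumptions, not the paper's implementation; the point is only that even plain MCTS already accumulates per-move visit counts and value estimates that an explanation can cite, and the enhancements discussed in the paper would add further such data.

```python
import math
import random

# Toy game used only to make the sketch runnable (an assumption, not from the paper):
# players alternately remove 1-3 stones; whoever takes the last stone wins.
class NimState:
    def __init__(self, stones=10, player=1):
        self.stones, self.player = stones, player
    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))
    def apply(self, move):
        return NimState(self.stones - move, -self.player)
    def is_terminal(self):
        return self.stones == 0
    def winner(self):
        # The player who just moved took the last stone and wins.
        return -self.player

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.untried = [], state.legal_moves()
        self.visits, self.total_value = 0, 0.0
    def ucb1(self, c=1.41):
        # Standard UCT rule: exploitation term plus exploration bonus.
        return (self.total_value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for a randomly chosen untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.apply(move), node, move))
            node = node.children[-1]
        # 3. Simulation: random playout until a terminal state.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        # 4. Backpropagation: credit the outcome from each node's perspective.
        winner = state.winner()
        while node is not None:
            node.visits += 1
            # Value is from the viewpoint of the player who moved into this node.
            node.total_value += 1.0 if winner == -node.state.player else 0.0
            node = node.parent
    return root

def explain_root(root):
    """Per-move statistics at the root: the knowledge-free data explanations can cite."""
    for child in sorted(root.children, key=lambda c: -c.visits):
        print(f"move {child.move}: visits={child.visits}, "
              f"win-rate={child.total_value / child.visits:.2f}")

root = mcts(NimState(stones=10))
explain_root(root)
```

Running the sketch prints, for every root move, how often the search explored it and its estimated win rate; these numbers are derived purely from the search itself, without any domain knowledge.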
URL
https://arxiv.org/abs/2506.13223