Abstract
Causal abstraction is a promising theoretical framework for explainable artificial intelligence that defines when an interpretable high-level causal model is a faithful simplification of a low-level deep learning system. However, existing causal abstraction methods have two major limitations: they require a brute-force search over alignments between the high-level model and the low-level one, and they presuppose that variables in the high-level model will align with disjoint sets of neurons in the low-level one. In this paper, we present distributed alignment search (DAS), which overcomes these limitations. In DAS, we find the alignment between high-level and low-level models using gradient descent rather than conducting a brute-force search, and we allow individual neurons to play multiple distinct roles by analyzing representations in non-standard bases: distributed representations. Our experiments show that DAS can discover internal structure that prior approaches miss. Overall, DAS removes previous obstacles to conducting causal abstraction analyses and allows us to find conceptual structure in trained neural nets.
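The core operation the abstract describes, intervening on representations in a non-standard basis rather than on individual neurons, can be illustrated with a minimal sketch. This is not the paper's implementation: the rotation `R` below is a fixed random orthogonal matrix purely for illustration, whereas in DAS it would be learned by gradient descent so that the first `k` rotated coordinates align with a high-level causal variable; the function name `interchange_in_rotated_basis` is our own.

```python
import numpy as np

def interchange_in_rotated_basis(h_base, h_source, R, k):
    """Interchange intervention in a rotated (non-standard) basis.

    h_base, h_source: hidden vectors from the base and source inputs.
    R: orthogonal matrix defining the non-standard basis (learned in DAS;
       fixed here for illustration).
    k: dimensionality of the aligned subspace.
    """
    z_base = R @ h_base        # express base activations in the new basis
    z_source = R @ h_source    # same for the source input
    z_new = z_base.copy()
    z_new[:k] = z_source[:k]   # intervene only on the aligned subspace
    return R.T @ z_new         # rotate back to the standard (neuron) basis

# Toy example with a random orthogonal basis.
rng = np.random.default_rng(0)
d, k = 4, 2
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal matrix
h_base = rng.normal(size=d)
h_source = rng.normal(size=d)

h_new = interchange_in_rotated_basis(h_base, h_source, Q, k)

z_new = Q @ h_new
# First k rotated coordinates now come from the source input...
assert np.allclose(z_new[:k], (Q @ h_source)[:k])
# ...while the remaining coordinates are unchanged from the base input.
assert np.allclose(z_new[k:], (Q @ h_base)[k:])
```

Because each rotated coordinate is a linear combination of many neurons, a single neuron can contribute to several aligned variables at once, which is exactly the multi-role behavior the abstract attributes to distributed representations.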
URL
https://arxiv.org/abs/2303.02536