Abstract
Recent work on interpretability has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts. Concept learning models, however, have been shown to be prone to encoding impurities in their representations, failing to fully capture meaningful features of their inputs. While concept learning lacks metrics to measure such phenomena, the field of disentanglement learning has explored the related notion of underlying factors of variation in the data, with plenty of metrics to measure the purity of such factors. In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches. We show the advantage of these metrics over existing ones and demonstrate their utility in evaluating the robustness of concept representations and of interventions performed on them. In addition, we use them to benchmark state-of-the-art methods from both families and find that, contrary to common assumptions, supervision alone may not be sufficient for pure concept representations.
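As a rough illustration of what "impurity" in concept representations means (this is a minimal sketch under assumed data layouts, not the metrics proposed in the paper), one can probe whether the representation learned for one concept inadvertently predicts the ground-truth labels of other concepts. The function name, the AUC-based probe, and the array shapes below are assumptions for illustration only.

# Illustrative inter-concept impurity probe (Python, scikit-learn).
# Assumes per-concept representations of shape (n_samples, n_concepts, d)
# and binary concept annotations of shape (n_samples, n_concepts).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def impurity_matrix(concept_reprs, concept_labels, seed=0):
    # Entry (i, j) is the held-out AUC of a linear probe trained to predict
    # concept j's label from concept i's representation. Off-diagonal AUCs
    # far above chance (0.5) suggest impure concept representations.
    n_concepts = concept_labels.shape[1]
    aucs = np.zeros((n_concepts, n_concepts))
    for i in range(n_concepts):
        x = concept_reprs[:, i, :]
        for j in range(n_concepts):
            x_tr, x_te, y_tr, y_te = train_test_split(
                x, concept_labels[:, j], test_size=0.3, random_state=seed)
            clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
            aucs[i, j] = roc_auc_score(y_te, clf.predict_proba(x_te)[:, 1])
    return aucs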
URL
https://arxiv.org/abs/2301.10367