Abstract
Explanations obtained from transformer-based architectures in the form of raw attention can be seen as class-agnostic saliency maps. Additionally, attention-based pooling serves as a form of masking in feature space. Motivated by this observation, we design an attention-based pooling mechanism intended to replace Global Average Pooling (GAP) at inference. This mechanism, called Cross-Attention Stream (CA-Stream), comprises a stream of cross-attention blocks interacting with features at different network depths. CA-Stream enhances interpretability in models while preserving recognition performance.
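To make the idea of attention-based pooling concrete, below is a minimal, hypothetical PyTorch sketch of a single cross-attention pooling block that replaces GAP: a learnable query attends over the flattened convolutional feature map, and the attention weights double as a saliency map. The class name, dimensions, and initialization are illustrative assumptions, not the paper's exact CA-Stream architecture (which chains such blocks across network depths).

```python
import torch
import torch.nn as nn

class CrossAttentionPooling(nn.Module):
    """Hypothetical sketch: pool a spatial feature map with one cross-attention
    block driven by a learnable query token, instead of Global Average Pooling."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # CLS-like learnable query (illustrative initialization)
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) convolutional feature map
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, C)
        q = self.query.expand(b, -1, -1)            # (B, 1, C)
        pooled, attn_weights = self.attn(q, tokens, tokens)
        # pooled replaces the GAP vector; attn_weights (B, 1, H*W) give a
        # class-agnostic saliency map over spatial locations
        return pooled.squeeze(1)                    # (B, C)

# Usage: swap in for GAP before the classifier head (shapes are assumptions)
feats = torch.randn(2, 512, 7, 7)                  # e.g. a ResNet final feature map
pool = CrossAttentionPooling(dim=512)
vec = pool(feats)                                   # (2, 512), fed to the classifier
```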
URL
https://arxiv.org/abs/2404.14996