Abstract
Attention mechanisms have recently boosted performance on a range of NLP tasks. Because attention layers explicitly weight input components' representations, it is also often assumed that attention can be used to identify information that models found important (e.g., specific contextualized word tokens). We test whether that assumption holds by manipulating attention weights in already-trained text classification models and analyzing the resulting differences in their predictions. While we observe some ways in which higher attention weights correlate with greater impact on model predictions, we also find many ways in which this does not hold, i.e., where gradient-based rankings of attention weights better predict their effects than their magnitudes. We conclude that while attention noisily predicts input components' overall importance to a model, it is by no means a fail-safe indicator.
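To make the experimental idea concrete, below is a minimal illustrative sketch (not the authors' code) of the kind of attention-manipulation test the abstract describes: rank a toy model's attention weights once by magnitude and once by a gradient-based importance score, then zero the top-ranked weight and measure how much the prediction shifts. The model, function names, and parameters (ToyAttnClassifier, attn_override, etc.) are hypothetical and chosen only for illustration.

```python
# Hypothetical sketch: compare magnitude-based vs. gradient-based rankings of
# attention weights and measure the prediction change from zeroing the top weight.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyAttnClassifier(nn.Module):
    """Toy single-head attention over token embeddings, followed by a linear classifier."""
    def __init__(self, vocab_size=100, dim=32, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)        # unnormalized attention scores
        self.clf = nn.Linear(dim, n_classes)

    def forward(self, tokens, attn_override=None):
        h = self.emb(tokens)                                      # (seq_len, dim)
        attn = torch.softmax(self.score(h).squeeze(-1), dim=0)    # (seq_len,)
        if attn_override is not None:
            attn = attn_override / attn_override.sum()            # manipulated weights, renormalized
        context = attn @ h                                        # attention-weighted sum of token states
        return torch.softmax(self.clf(context), dim=-1), attn

model = ToyAttnClassifier()
tokens = torch.randint(0, 100, (10,))

# Original prediction and attention distribution.
probs, attn = model(tokens)
pred_class = probs.argmax().item()

# Gradient-based importance: gradient of the predicted-class probability w.r.t. each attention weight.
grad_attn, = torch.autograd.grad(probs[pred_class], attn, retain_graph=True)

rank_by_magnitude = attn.detach().abs().argsort(descending=True)
rank_by_gradient = grad_attn.abs().argsort(descending=True)

def prediction_shift(zero_idx):
    """Zero one attention weight, renormalize, and report the drop in the predicted-class probability."""
    new_attn = attn.detach().clone()
    new_attn[zero_idx] = 0.0
    new_probs, _ = model(tokens, attn_override=new_attn)
    return (probs[pred_class] - new_probs[pred_class]).item()

print("zeroing highest-magnitude attention weight:", prediction_shift(rank_by_magnitude[0]))
print("zeroing highest-gradient attention weight: ", prediction_shift(rank_by_gradient[0]))
```

Under the abstract's framing, a larger prediction shift for the gradient-ranked weight than for the magnitude-ranked weight would be an instance where attention magnitude is a weaker importance indicator than the gradient-based ranking.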
URL
https://arxiv.org/abs/1906.03731