Abstract
In this paper we propose that the dot-product pairwise-matching attention layer, which is widely used in transformer-based models, is redundant for model performance. Attention in its original formulation should rather be seen as a human-level tool for exploring and visualizing relevance scores within sequences. Instead, we present a simple and fast alternative, free of any approximation, that, to the best of our knowledge, outperforms existing attention approximations on the text classification task of the Long-Range Arena benchmark.
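For context, the layer the abstract argues is redundant is standard scaled dot-product attention. The sketch below is a minimal NumPy illustration of that baseline mechanism (the function name and tensor shapes are illustrative); it is not the paper's proposed alternative, which the abstract does not detail.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K: arrays of shape (seq_len, d_k); V: (seq_len, d_v).
    The O(seq_len^2) pairwise score matrix computed here is the
    component the paper argues is redundant for model performance.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise matching scores
    # Numerically stable row-wise softmax over the score matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # relevance-weighted mixture of values

# Tiny usage example with random inputs
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The `weights` matrix above is the per-pair relevance map that, per the abstract, is better viewed as a visualization tool for humans than as a necessary ingredient of the model itself.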
URL
https://arxiv.org/abs/2111.15588