Abstract
Pooling is an essential component of a wide variety of sentence representation and embedding models. This paper explores generalized pooling methods to enhance sentence embedding. We propose vector-based multi-head attention, which includes the widely used max pooling, mean pooling, and scalar self-attention as special cases. The model benefits from properly designed penalization terms that reduce redundancy across attention heads. We evaluate the proposed model on three different tasks: natural language inference (NLI), author profiling, and sentiment classification. The experiments show that the proposed model achieves significant improvement over strong sentence-encoding-based methods, resulting in state-of-the-art performance on four datasets. The proposed approach can be easily applied to problems beyond those discussed in this paper.
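To make the idea concrete, the following is a minimal NumPy sketch of vector-based multi-head attention pooling as the abstract describes it: each head produces a per-dimension attention weight for every token, the weighted hidden states are summed elementwise, and the head outputs are concatenated. The function name, the two-layer scoring network, and the weight shapes are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def vector_multihead_pool(H, W1_list, W2_list):
    """Sketch of vector-based multi-head attention pooling.

    H: (n_tokens, d) matrix of token hidden states.
    For each head, a two-layer network scores every token per
    dimension, the scores are softmax-normalized over tokens,
    and the pooled vector is the elementwise weighted sum:
        A = softmax_tokens(relu(H @ W1) @ W2)   # (n_tokens, d)
        v = sum_t A[t] * H[t]                   # (d,)
    Head outputs are concatenated into one embedding.
    """
    outputs = []
    for W1, W2 in zip(W1_list, W2_list):
        scores = np.maximum(H @ W1, 0.0) @ W2   # (n_tokens, d)
        A = softmax(scores, axis=0)             # normalize over tokens
        outputs.append((A * H).sum(axis=0))     # elementwise weighted sum
    return np.concatenate(outputs)
```

The special cases follow directly: uniform weights over tokens recover mean pooling, a per-dimension one-hot on the maximal token recovers max pooling, and constraining each row of `A` to a single shared scalar per token recovers ordinary scalar self-attention.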
URL
https://arxiv.org/abs/1806.09828