Abstract
Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models offer limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.
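The abstract's core idea, an attention mechanism whose query at each step is derived from the summary built in previous steps, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual architecture (the paper uses learned, trained components): the query projection `W_q`, the additive summary update, and the uniform first attention pass are all assumptions made for a self-contained example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def iterative_attention(inputs, num_steps=3, seed=0):
    """Sketch of iterative attention: at every step the query is
    computed from the running summary, so each pass reuses the
    results of previously computed queries.

    inputs: array of shape (seq_len, d) -- token representations.
    Returns the final summary vector and the per-step attention weights.
    """
    rng = np.random.default_rng(seed)
    d = inputs.shape[1]
    # Hypothetical (random, untrained) query projection; in the real
    # model this would be learned.
    W_q = rng.standard_normal((d, d)) / np.sqrt(d)

    summary = np.zeros(d)          # incremental representation
    weights_per_step = []
    for _ in range(num_steps):
        query = summary @ W_q      # query derived from previous results
        scores = inputs @ query / np.sqrt(d)
        attn = softmax(scores)     # interpretable: one distribution per step
        context = attn @ inputs
        summary = summary + context  # incrementally refine the summary
        weights_per_step.append(attn)
    return summary, weights_per_step
```

Inspecting `weights_per_step` is what makes such a model interpretable: each step yields an explicit attention distribution over the input, showing which aspect was combined into the representation at that point.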
URL
https://arxiv.org/abs/1808.10503