Abstract
Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, but treat entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated, and generates text conditioned on the data input and the entity memory representations using hierarchical attention at each time step. We present experiments on the RotoWire benchmark and on a new, five-times-larger dataset in the baseball domain that we create. Our results show that the proposed model outperforms competitive baselines in both automatic and human evaluation.
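The two ideas the abstract highlights — hierarchical attention over entity memories and their records, plus a dynamic (gated) update of those memories at each decoding step — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact parameterization: the gate form, weight names (`W_g`, `W_u`), and random stand-in encodings are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8            # hidden size (assumed)
n_entities = 3   # entities tracked in memory
n_records = 4    # data records per entity

# Entity memories and per-entity record encodings (random stand-ins).
memory = rng.standard_normal((n_entities, d))
records = rng.standard_normal((n_entities, n_records, d))

def decode_step(h, memory, records, W_g, W_u):
    """One decoding step: hierarchical attention (entities, then records),
    followed by a gated update of each entity's memory. A sketch only."""
    # Level 1: attention over entity memories.
    ent_attn = softmax(memory @ h)                # (n_entities,)
    # Level 2: attention over each entity's records, weighted by level 1.
    context = np.zeros_like(h)
    for k in range(memory.shape[0]):
        rec_attn = softmax(records[k] @ h)        # (n_records,)
        context += ent_attn[k] * (rec_attn @ records[k])
    # Dynamic, gated update of entity memories from the decoder state.
    gate = 1.0 / (1.0 + np.exp(-(memory @ W_g @ h)))  # (n_entities,)
    candidate = np.tanh(W_u @ h)                       # (d,)
    new_memory = (1 - gate)[:, None] * memory + gate[:, None] * candidate
    return context, new_memory

h = rng.standard_normal(d)          # decoder hidden state at one time step
W_g = rng.standard_normal((d, d))
W_u = rng.standard_normal((d, d))
context, memory = decode_step(h, memory, records, W_g, W_u)
print(context.shape, memory.shape)  # (8,) (3, 8)
```

The context vector would feed the decoder's next-token prediction, while the updated memories carry entity state forward to the next time step.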
URL
https://arxiv.org/abs/1906.03221