Abstract
The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging an LSTM to encode a linearized AMR structure. While able to model non-local semantic information, a sequential LSTM can lose information from the AMR graph structure, and thus struggles with large graphs, which yield long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature.
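To make the contrast concrete, below is a minimal illustrative sketch (not the paper's code) of the depth-first linearization that sequence-to-sequence baselines apply to an AMR graph before LSTM encoding; the toy graph, variable names, and `linearize` helper are hypothetical. The graph-to-sequence model proposed in the abstract would instead consume the node/edge structure directly, avoiding this flattening step.

```python
def linearize(node, graph):
    """Depth-first linearization of an AMR-style graph into a token list."""
    concept, edges = graph[node]
    tokens = ["(", concept]
    for relation, child in edges:
        tokens.append(relation)
        tokens.extend(linearize(child, graph))
    tokens.append(")")
    return tokens

# Toy AMR for "the boy wants to go":
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))
graph = {
    "w": ("want-01", [(":ARG0", "b"), (":ARG1", "g")]),
    "b": ("boy", []),
    "g": ("go-01", []),  # re-entrant :ARG0 b edge omitted for simplicity
}

print(" ".join(linearize("w", graph)))
# → ( want-01 :ARG0 ( boy ) :ARG1 ( go-01 ) )
```

Note that even in this toy case the re-entrant edge (the boy is the agent of both `want-01` and `go-01`) cannot be represented faithfully in the flat token sequence, which is one way linearization loses graph structure.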
URL
https://arxiv.org/abs/1805.02473