Abstract
We present a syntax-infused variational autoencoder (SIVAE) that integrates sentences with their syntactic trees to improve the grammaticality of generated sentences. Distinct from existing VAE-based text generative models, SIVAE contains two separate latent spaces, one for sentences and one for syntactic trees. The evidence lower bound objective is redesigned correspondingly, by optimizing a joint distribution that accommodates two encoders and two decoders. SIVAE works with long short-term memory architectures to simultaneously generate sentences and syntactic trees. Two versions of SIVAE are proposed: one captures the dependencies between the latent variables through a conditional prior network, and the other treats the latent variables independently so that syntactically controlled sentence generation can be performed. Experimental results demonstrate the generative superiority of SIVAE on both reconstruction and targeted syntactic evaluations. Finally, we show that the proposed models can be used for unsupervised paraphrasing given different syntactic tree templates.
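The abstract does not state the redesigned objective explicitly. As a rough sketch only, writing $x$ for a sentence, $y$ for its syntactic tree, and $z_x$, $z_y$ for the corresponding latent variables (all symbols and the exact factorization are assumptions for illustration, not taken from the paper), a joint ELBO over two encoders and two decoders in the conditional-prior variant might take the form:

$$
\log p(x, y) \;\ge\; \mathbb{E}_{q_\psi(z_y \mid y)}\Big[\, \mathbb{E}_{q_\phi(z_x \mid x)}\big[\log p_\theta(x \mid z_x, z_y)\big] \;-\; \mathrm{KL}\big(q_\phi(z_x \mid x)\,\big\|\,p(z_x \mid z_y)\big) \,\Big] \;+\; \mathbb{E}_{q_\psi(z_y \mid y)}\big[\log p_\theta(y \mid z_y)\big] \;-\; \mathrm{KL}\big(q_\psi(z_y \mid y)\,\big\|\,p(z_y)\big)
$$

Under this reading, the independent-prior variant would replace the conditional prior $p(z_x \mid z_y)$ with a fixed prior $p(z_x)$, which is what allows $z_y$ to be swapped for a different syntactic template at generation time.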
URL
https://arxiv.org/abs/1906.02181