Abstract
Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world. In particular, spatial relations are encoded in a way that is inconsistent with human spatial reasoning and lacks invariance to viewpoint changes. We present a system capable of capturing the semantics of spatial relations such as "behind" and "left of" from natural language. Our key contributions are a novel multi-modal objective based on generating images of scenes from their textual descriptions, and a new dataset on which to train it. We demonstrate that internal representations are robust to meaning-preserving transformations of descriptions (paraphrase invariance), while viewpoint invariance is an emergent property of the system.
URL
https://arxiv.org/abs/1807.01670