Abstract
Relations amongst entities play a central role in image understanding. Due to the combinatorial complexity of modeling (subject, predicate, object) relation triplets, it is crucial to develop a method that can not only recognize seen relations, but also generalize well to unseen cases. Inspired by Visual Translation Embedding network (VTransE), we propose the Union Visual Translation Embedding network (UVTransE) to capture both common and rare relations with better accuracy. UVTransE maps the subject, the object, and the union (subject, object) image regions into a low-dimensional relation space where a predicate can be expressed as a vector subtraction, such that predicate $\approx$ union (subject, object) $-$ subject $-$ object. We present a comprehensive evaluation of our method on multiple challenging benchmarks: the Visual Relationship Detection dataset (VRD); UnRel dataset for rare and unusual relations; two subsets of Visual Genome; and the Open Images Challenge. Our approach decisively outperforms VTransE and comes close to or exceeds the state of the art across a range of settings, from small-scale to large-scale datasets, from common to previously unseen relations. On Visual Genome and Open Images, it also achieves promising results on the recently introduced task of scene graph generation.
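The translation-embedding idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustrative toy, not the authors' implementation: the projection matrices, dimensions, and predicate vocabulary are all stand-in assumptions. It shows the core UVTransE scoring rule, where features of the subject, object, and union boxes are projected into a shared relation space and the predicate embedding is recovered as union − subject − object.

```python
import numpy as np

rng = np.random.default_rng(0)
D_feat, D_rel = 16, 4  # toy appearance-feature / relation-space dimensions

# Assumed learned linear projections (random stand-ins for illustration)
W_s = rng.normal(size=(D_rel, D_feat))  # subject-region projection
W_o = rng.normal(size=(D_rel, D_feat))  # object-region projection
W_u = rng.normal(size=(D_rel, D_feat))  # union-region projection

def predicate_embedding(f_subj, f_obj, f_union):
    """UVTransE rule: predicate ≈ union(subject, object) − subject − object."""
    return W_u @ f_union - W_s @ f_subj - W_o @ f_obj

# Hypothetical predicate vocabulary with learned anchor vectors
predicates = {"on": rng.normal(size=D_rel), "under": rng.normal(size=D_rel)}

def classify(f_subj, f_obj, f_union):
    """Pick the predicate whose anchor is nearest to the recovered embedding."""
    p = predicate_embedding(f_subj, f_obj, f_union)
    return min(predicates, key=lambda name: np.linalg.norm(p - predicates[name]))

# Toy region features standing in for CNN appearance features
f_s, f_o, f_u = (rng.normal(size=D_feat) for _ in range(3))
print(classify(f_s, f_o, f_u))
```

In the actual model the projections and predicate scores are learned end-to-end; the nearest-anchor classification here is only a simplified stand-in to show how the vector-subtraction formulation turns relation recognition into geometry in the relation space.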
URL
https://arxiv.org/abs/1905.11624