Abstract
Visual relationship detection is an intermediate image-understanding task that detects two objects in an image and classifies the predicate that describes the relationship between them. The three components are linguistically and visually correlated (e.g. "wear" is related to "person" and "shirt", while "on" is related to "laptop" and "table"); the solution space is therefore huge, since many combinations of subject, predicate, and object are possible. This work exploits language and visual modules and proposes a sophisticated spatial vector. The models outperform the state of the art without costly linguistic knowledge distillation from a large text corpus or complex loss functions. All experiments were evaluated only on the Visual Relationship Detection and Visual Genome datasets.
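The abstract does not specify how the proposed spatial vector is built; a common choice in visual relationship detection is to encode the subject and object bounding boxes as normalized coordinates plus relative offsets. The sketch below is an illustrative assumption only, not the paper's actual formulation, and the function name and feature layout are hypothetical.

```python
import math

def spatial_vector(subj, obj, img_w, img_h):
    """Encode a subject/object box pair (x1, y1, x2, y2) as a
    12-dimensional spatial feature (illustrative sketch)."""
    sx1, sy1, sx2, sy2 = subj
    ox1, oy1, ox2, oy2 = obj
    # Each box normalized by the image size.
    s = [sx1 / img_w, sy1 / img_h, sx2 / img_w, sy2 / img_h]
    o = [ox1 / img_w, oy1 / img_h, ox2 / img_w, oy2 / img_h]
    # Relative offsets and log scale ratios between the two boxes.
    sw, sh = sx2 - sx1, sy2 - sy1
    ow, oh = ox2 - ox1, oy2 - oy1
    rel = [(sx1 - ox1) / ow, (sy1 - oy1) / oh,
           math.log(sw / ow), math.log(sh / oh)]
    return s + o + rel
```

A feature like this is typically concatenated with the visual and language features before predicate classification, which lets the model distinguish predicates such as "on" versus "under" that differ mainly in geometry.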
URL
https://arxiv.org/abs/1904.07798