Abstract
Scene graph generation has received growing attention with advances in image understanding tasks such as object detection and attribute and relationship prediction. However, existing datasets are biased in terms of object and relationship labels, and often come with noisy and missing annotations, which makes developing a reliable scene graph prediction model very challenging. In this paper, we propose a novel scene graph generation algorithm with external knowledge and an image reconstruction loss to overcome these dataset issues. In particular, we extract commonsense knowledge from an external knowledge base to refine object and phrase features, improving generalizability in scene graph generation. To address the bias of noisy object annotations, we introduce an auxiliary image reconstruction path to regularize the scene graph generation network. Extensive experiments show that our framework generates better scene graphs, achieving state-of-the-art performance on two benchmarks: the Visual Relationship Detection and Visual Genome datasets.
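The abstract describes regularizing the scene graph generation network with an auxiliary image reconstruction loss. A minimal sketch of how such a combined training objective might be structured is below; the function names and the weighting scheme are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a combined objective: supervised scene-graph
# losses plus a weighted auxiliary reconstruction term. The weight
# `lam` and all names here are illustrative assumptions.

def scene_graph_loss(obj_loss: float, rel_loss: float) -> float:
    """Supervised loss over object and relationship predictions."""
    return obj_loss + rel_loss

def total_loss(obj_loss: float, rel_loss: float,
               recon_loss: float, lam: float = 0.1) -> float:
    """Scene-graph loss regularized by an auxiliary image
    reconstruction path, weighted by a hypothetical coefficient."""
    return scene_graph_loss(obj_loss, rel_loss) + lam * recon_loss
```

In this reading, the reconstruction term acts as a regularizer: it penalizes features that cannot explain the input image, which can counteract noisy or missing object annotations in the supervised terms.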
Abstract (translated)
With the development of image understanding tasks such as object detection and attribute and relationship prediction, scene graph generation has received growing attention. However, existing datasets are biased in their object and relationship labels, or often contain noisy and missing annotations, which makes developing a reliable scene graph prediction model very challenging. This paper proposes a scene graph generation algorithm based on external knowledge and an image reconstruction loss to address these dataset issues. Specifically, commonsense knowledge is extracted from an external knowledge base to refine object and phrase features and improve the generalizability of scene graph generation. To address the bias from noisy object annotations, we introduce an auxiliary image reconstruction path to regularize the scene graph generation network. Extensive experiments show that our framework generates better scene graphs, achieving state-of-the-art performance on two benchmark datasets: the Visual Relationship Detection and Visual Genome datasets.
URL
https://arxiv.org/abs/1904.00560