Paper Reading AI Learner

Union Visual Translation Embedding for Visual Relationship Detection and Scene Graph Generation

2019-05-28 06:10:02
Zih-Siou Hung, Arun Mallya, Svetlana Lazebnik

Abstract

Relations amongst entities play a central role in image understanding. Due to the combinatorial complexity of modeling (subject, predicate, object) relation triplets, it is crucial to develop a method that can not only recognize seen relations, but also generalize well to unseen cases. Inspired by Visual Translation Embedding network (VTransE), we propose the Union Visual Translation Embedding network (UVTransE) to capture both common and rare relations with better accuracy. UVTransE maps the subject, the object, and the union (subject, object) image regions into a low-dimensional relation space where a predicate can be expressed as a vector subtraction, such that predicate $\approx$ union (subject, object) $-$ subject $-$ object. We present a comprehensive evaluation of our method on multiple challenging benchmarks: the Visual Relationship Detection dataset (VRD); UnRel dataset for rare and unusual relations; two subsets of Visual Genome; and the Open Images Challenge. Our approach decisively outperforms VTransE and comes close to or exceeds the state of the art across a range of settings, from small-scale to large-scale datasets, from common to previously unseen relations. On Visual Genome and Open Images, it also achieves promising results on the recently introduced task of scene graph generation.
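The abstract's core formulation can be sketched in a few lines: derive a predicate vector by subtracting the subject and object embeddings from the union-region embedding, then classify it against a table of learned predicate embeddings. This is a minimal illustration, not the paper's implementation; in UVTransE the subject, object, and union features come from learned projections of CNN region features, and the names `uvtranse_predicate_vector`, `classify_predicate`, and the toy vectors below are hypothetical.

```python
import numpy as np

def uvtranse_predicate_vector(subject_feat, object_feat, union_feat):
    """Core UVTransE relation: predicate ~ union(subject, object) - subject - object.
    All three inputs are assumed to already live in the low-dimensional
    relation space (the paper learns these projections end-to-end)."""
    return union_feat - subject_feat - object_feat

def classify_predicate(pred_vec, predicate_embeddings):
    """Pick the predicate whose learned embedding is closest (Euclidean)
    to the derived predicate vector.
    predicate_embeddings: (num_predicates, d) matrix, one row per predicate."""
    dists = np.linalg.norm(predicate_embeddings - pred_vec, axis=1)
    return int(np.argmin(dists))

# Toy example with 2-D embeddings (illustrative values only).
subj = np.array([1.0, 0.0])
obj = np.array([0.0, 1.0])
union = np.array([2.0, 2.0])
vec = uvtranse_predicate_vector(subj, obj, union)   # -> [1.0, 1.0]
```

Because the predicate is recovered by subtraction rather than memorized per triplet, the same predicate embedding can score unseen (subject, object) pairs, which is what lets the model generalize to rare relations.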

URL

https://arxiv.org/abs/1905.11624

PDF

https://arxiv.org/pdf/1905.11624.pdf

