Graphical Contrastive Losses for Scene Graph Generation

2019-03-07 05:07:43
Ji Zhang, Kevin J. Shih, Ahmed Elgammal, Andrew Tao, Bryan Catanzaro

Abstract

Most scene graph generators use a two-stage pipeline to detect visual relationships: the first stage detects entities, and the second predicts the predicate for each entity pair using a softmax distribution. We find that such pipelines, trained with only a cross entropy loss over predicate classes, suffer from two common errors. The first, Entity Instance Confusion, occurs when the model confuses multiple instances of the same type of entity (e.g. multiple cups). The second, Proximal Relationship Ambiguity, arises when multiple subject-predicate-object triplets appear in close proximity with the same predicate, and the model struggles to infer the correct subject-object pairings (e.g. mis-pairing musicians and their instruments). We propose a set of contrastive loss formulations that specifically target these types of errors within the scene graph generation problem, collectively termed the Graphical Contrastive Losses. These losses explicitly force the model to disambiguate related and unrelated instances through margin constraints specific to each type of confusion. We further construct a relationship detector, called RelDN, using the aforementioned pipeline to demonstrate the efficacy of our proposed losses. Our model outperforms the winning method of the OpenImages Relationship Detection Challenge by 4.7% (16.5% relative) on the test set. We also show improved results over the best previous methods on the Visual Genome and Visual Relationship Detection datasets.
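To make the margin idea concrete, below is a minimal PyTorch sketch of one hinge-style contrastive term in the spirit of these losses: for each subject, the affinity of its ground-truth related object must exceed that of its hardest unrelated candidate by a margin. The affinity definition, the margin value, the hardest-negative selection, and the tensor layout are illustrative assumptions, not the paper's exact formulation (which defines several such losses tailored to each confusion type).

```python
# Illustrative sketch of a margin-based contrastive term in the spirit of
# the Graphical Contrastive Losses. The exact losses are defined in the
# paper; the affinity function and margin here are assumptions.
import torch
import torch.nn.functional as F


def contrastive_margin_loss(affinity, pos_mask, margin=0.2):
    """Hinge-style contrast between related and unrelated pairings.

    affinity: (S, O) tensor of predicted relatedness scores, where
              affinity[i, j] estimates how likely subject i relates to
              object j (e.g. 1 - p("no relationship")).
    pos_mask: (S, O) boolean tensor, True where (i, j) is a ground-truth
              related pair.
    For each subject, push its best-scoring positive object above its
    best-scoring (hardest) negative object by at least `margin`.
    """
    neg_inf = torch.finfo(affinity.dtype).min
    # Best positive and hardest negative affinity per subject row.
    pos_best = affinity.masked_fill(~pos_mask, neg_inf).max(dim=1).values
    neg_best = affinity.masked_fill(pos_mask, neg_inf).max(dim=1).values
    # Only subjects with at least one ground-truth pairing contribute.
    valid = pos_mask.any(dim=1)
    hinge = F.relu(margin - pos_best + neg_best)
    return hinge[valid].mean() if valid.any() else affinity.new_zeros(())


# Toy usage: 2 subjects x 3 candidate objects, mimicking the "multiple
# cups" case where two same-class instances get similar scores.
scores = torch.tensor([[0.9, 0.8, 0.1],
                       [0.3, 0.2, 0.7]])
labels = torch.tensor([[True, False, False],
                       [False, False, True]])
print(contrastive_margin_loss(scores, labels))  # tensor(0.0500)
```

The first subject's true pairing (0.9) beats its hardest distractor (0.8) by less than the margin, so it incurs a penalty; the cross entropy loss alone would not penalize this near-tie, which is exactly the gap the contrastive constraints are meant to close.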

URL

https://arxiv.org/abs/1903.02728

PDF

https://arxiv.org/pdf/1903.02728.pdf
