Paper Reading AI Learner

Fine-Grained is Too Coarse: A Novel Data-Centric Approach for Efficient Scene Graph Generation

2023-05-30 00:55:49
Maëlic Neau, Paulo Santos, Anne-Gwenn Bosser, Cédric Buche

Abstract

Learning to compose visual relationships from raw images in the form of scene graphs is a highly challenging task due to contextual dependencies, but it is essential in computer vision applications that depend on scene understanding. However, no current approach in Scene Graph Generation (SGG) aims at providing graphs that are useful for downstream tasks. Instead, the focus has primarily been on unbiasing the data distribution to predict more fine-grained relations. That said, not all fine-grained relations are equally relevant, and at least some of them are of no use for real-world applications. In this work, we introduce the task of Efficient SGG, which prioritizes the generation of relevant relations and facilitates the use of Scene Graphs in downstream tasks such as Image Generation. To support further approaches to this task, we present a new dataset, VG150-curated, based on the annotations of the popular Visual Genome dataset. We show through a set of experiments that this dataset contains higher-quality and more diverse annotations than the one usually adopted by SGG approaches. Finally, we demonstrate the effectiveness of this dataset on the task of Image Generation from Scene Graphs. Our approach can be easily replicated to improve the quality of other Scene Graph Generation datasets.
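The data-centric idea above — keeping only the relations that are useful downstream — can be illustrated with a minimal sketch. The triple format follows Visual Genome's (subject, predicate, object) annotations; the `curate` helper and the set of low-information predicates are illustrative assumptions, not the paper's actual curation pipeline:

```python
# A scene graph as (subject, predicate, object) triples, as annotated
# in Visual Genome.
scene_graph = [
    ("man", "riding", "horse"),
    ("man", "has", "arm"),     # trivially true; little value downstream
    ("horse", "on", "grass"),
    ("arm", "of", "man"),      # redundant part-whole relation
]

# Hypothetical set of low-information predicates to prune (for
# illustration only; the paper's curation criteria differ).
UNINFORMATIVE = {"has", "of"}

def curate(triples):
    """Keep only relations whose predicate carries useful semantics."""
    return [t for t in triples if t[1] not in UNINFORMATIVE]

curated = curate(scene_graph)
print(curated)  # [('man', 'riding', 'horse'), ('horse', 'on', 'grass')]
```

A downstream consumer such as an image-generation model would then condition only on the curated triples, rather than on the full, noisier annotation set.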

URL

https://arxiv.org/abs/2305.18668

PDF

https://arxiv.org/pdf/2305.18668.pdf

