Paper Reading AI Learner

Towards Flexible Visual Relationship Segmentation

2024-08-15 17:57:38
Fangrui Zhu, Jianwei Yang, Huaizu Jiang

Abstract

Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring relationships (RR) tasks. Given the complexity and interconnectedness of these tasks, it is crucial to have a flexible framework that can address them in a cohesive manner. In this work, we propose FleVRS, a single model that seamlessly integrates the above three aspects in standard and promptable visual relationship segmentation, and further possesses the capability for open-vocabulary segmentation to adapt to novel scenarios. FleVRS leverages the synergy between text and image modalities to ground various types of relationships from images, and uses textual features from vision-language models for visual concept understanding. Empirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 $mAP$ on HICO-DET, +11.4 $Acc$ on VRD, and +4.7 $mAP$ on unseen HICO-DET. FleVRS represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.
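The abstract does not detail the architecture, but the promptable setting it describes — grounding a relationship specified by a textual query — can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration, not FleVRS's actual design: a (subject, predicate, object) prompt, embedded by a vision-language text encoder such as CLIP, is fused into a single query that cross-attends over image features to produce subject and object mask logits. All module names (e.g., `PromptableVRSHead`), dimensions, and the single-layer decoder are illustrative assumptions.

```python
# Minimal sketch (not the paper's actual architecture) of promptable
# visual relationship segmentation: a <subject, predicate, object> text
# prompt is embedded, fused into a query, and used to cross-attend over
# image features to predict subject/object masks. All names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PromptableVRSHead(nn.Module):
    def __init__(self, text_dim=512, img_dim=256, n_heads=8):
        super().__init__()
        # Project a concatenated (subject, predicate, object) text
        # embedding into a single relationship query.
        self.query_proj = nn.Linear(3 * text_dim, img_dim)
        # One cross-attention layer standing in for a transformer decoder.
        self.cross_attn = nn.MultiheadAttention(img_dim, n_heads, batch_first=True)
        # Per-entity embeddings: dot products with pixel features yield
        # subject and object mask logits (mask-classification style).
        self.mask_embed = nn.Linear(img_dim, 2 * img_dim)

    def forward(self, txt_sub, txt_pred, txt_obj, pixel_feats):
        # txt_*: (B, text_dim) features from a frozen VLM text encoder;
        # pixel_feats: (B, img_dim, H, W) image features.
        query = self.query_proj(torch.cat([txt_sub, txt_pred, txt_obj], dim=-1))
        feats = pixel_feats.flatten(2).transpose(1, 2)          # (B, HW, C)
        query, _ = self.cross_attn(query.unsqueeze(1), feats, feats)
        sub_emb, obj_emb = self.mask_embed(query.squeeze(1)).chunk(2, dim=-1)
        # Mask logits via dot product between entity embeddings and pixels.
        sub_mask = torch.einsum("bc,bchw->bhw", sub_emb, pixel_feats)
        obj_mask = torch.einsum("bc,bchw->bhw", obj_emb, pixel_feats)
        return sub_mask, obj_mask

# Usage with random tensors in place of real text / image features.
head = PromptableVRSHead()
txt = [torch.randn(1, 512) for _ in range(3)]   # "person", "riding", "horse"
masks = head(*txt, torch.randn(1, 256, 32, 32))
print([m.shape for m in masks])                 # two (1, 32, 32) mask logits
```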

URL

https://arxiv.org/abs/2408.08305

PDF

https://arxiv.org/pdf/2408.08305.pdf

