Paper Reading AI Learner

Hallucinate, Ground, Repeat: A Framework for Generalized Visual Relationship Detection

2025-06-06 00:43:15
Shanmukha Vellamcheti, Sanjoy Kundu, Sathyanarayanan N. Aakur

Abstract

Understanding relationships between objects is central to visual intelligence, with applications in embodied AI, assistive systems, and scene understanding. Yet, most visual relationship detection (VRD) models rely on a fixed predicate set, limiting their generalization to novel interactions. A key challenge is the inability to visually ground semantically plausible, but unannotated, relationships hypothesized from external knowledge. This work introduces an iterative visual grounding framework that leverages large language models (LLMs) as structured relational priors. Inspired by expectation-maximization (EM), our method alternates between generating candidate scene graphs from detected objects using an LLM (expectation) and training a visual model to align these hypotheses with perceptual evidence (maximization). This process bootstraps relational understanding beyond annotated data and enables generalization to unseen predicates. Additionally, we introduce a new benchmark for open-world VRD on Visual Genome with 21 held-out predicates and evaluate under three settings: seen, unseen, and mixed. Our model outperforms LLM-only, few-shot, and debiased baselines, achieving mean recalls (mR@50) of 15.9, 13.1, and 11.7 for predicate classification under these three settings. These results highlight the promise of grounded LLM priors for scalable open-world visual understanding.
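The abstract outlines an EM-style alternation between LLM hypothesis generation and visual grounding. The sketch below illustrates that loop in Python as a minimal reading of the abstract; every name in it (detector, llm.propose_triplets, grounder.score, grounder.fit) and the 0.5 grounding threshold are hypothetical stand-ins, not the paper's actual interface.

```python
# Minimal sketch of the "hallucinate, ground, repeat" loop described in the
# abstract. Component names (detector, llm.propose_triplets, grounder.score,
# grounder.fit) and the 0.5 threshold are assumed for illustration only.

def hallucinate_ground_repeat(images, detector, llm, grounder, num_rounds=5):
    """Alternate between LLM hypothesis generation (E-step) and visual
    grounding (M-step) until the relational model stabilizes."""
    for _ in range(num_rounds):
        pseudo_labeled = []
        for image in images:
            objects = detector(image)  # detected object set for this image
            # E-step: the LLM acts as a structured relational prior and
            # proposes candidate (subject, predicate, object) triplets,
            # possibly including predicates never seen in annotations.
            candidates = llm.propose_triplets(objects)
            # Keep only hypotheses the current visual model can ground,
            # i.e. those with sufficient perceptual support in the image.
            grounded = [t for t in candidates
                        if grounder.score(image, t) > 0.5]  # assumed threshold
            pseudo_labeled.append((image, grounded))
        # M-step: train the visual model on the grounded hypotheses so the
        # next round's grounding (and filtering) is more reliable.
        grounder.fit(pseudo_labeled)
    return grounder
```

In this reading, generalization to the 21 held-out predicates comes from the E-step: the LLM is free to hypothesize relations outside the annotated predicate set, and the grounding filter decides which of those hypotheses are kept for training.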

URL

https://arxiv.org/abs/2506.05651

PDF

https://arxiv.org/pdf/2506.05651.pdf

