Paper Reading AI Learner

Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection

2023-02-01 06:20:54
Kaifeng Gao, Long Chen, Hanwang Zhang, Jun Xiao, Qianru Sun

Abstract

Prompt tuning with large-scale pretrained vision-language models enables open-vocabulary predictions from models trained on limited base categories, e.g., for object classification and detection. In this paper, we propose compositional prompt tuning with motion cues: an extended prompt tuning paradigm for compositional predictions on video data. In particular, we present Relation Prompt (RePro) for Open-vocabulary Video Visual Relation Detection (Open-VidVRD), where conventional prompt tuning is easily biased toward certain subject-object combinations and motion patterns. To this end, RePro addresses the two technical challenges of Open-VidVRD: 1) the prompt tokens should respect the two different semantic roles of subject and object, and 2) the tuning should account for the diverse spatio-temporal motion patterns of the subject-object compositions. Without bells and whistles, our RePro achieves new state-of-the-art performance on two VidVRD benchmarks, covering not only the base training object and predicate categories but also unseen ones. Extensive ablations also demonstrate the effectiveness of the proposed compositional and multi-mode prompt design. Code is available at this https URL.
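
To make the abstract's two ideas concrete, below is a minimal sketch of what role-aware, motion-conditioned prompt tuning can look like: separate learnable context tokens for the subject and object roles, with one token group per coarse motion mode. All names, shapes, and thresholds here (CompositionalPrompt, coarse_motion_mode, num_motion_modes, the pixel thresholds) are illustrative assumptions made for this post, not the authors' RePro implementation; see the paper and code link for the actual design.

```python
# Illustrative sketch only -- module/function names, shapes, and thresholds are
# assumptions made for this post, not the authors' released RePro code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def coarse_motion_mode(subj_boxes, obj_boxes):
    """Bucket a subject-object tracklet pair into a coarse motion mode
    (0: moving apart, 1: approaching, 2: roughly static) from the change in
    center distance between first and last frame. Boxes: [T, 4] as (x1, y1, x2, y2)."""
    def centers(b):
        return torch.stack([(b[:, 0] + b[:, 2]) / 2, (b[:, 1] + b[:, 3]) / 2], dim=-1)

    dist = (centers(subj_boxes) - centers(obj_boxes)).norm(dim=-1)  # [T]
    delta = (dist[-1] - dist[0]).item()
    if delta > 10.0:        # pixel thresholds chosen arbitrarily for this sketch
        return 0
    if delta < -10.0:
        return 1
    return 2


class CompositionalPrompt(nn.Module):
    """Learnable prompt tokens split by semantic role (subject vs. object),
    with a separate token group per coarse motion mode."""

    def __init__(self, num_motion_modes=3, tokens_per_role=4, embed_dim=512):
        super().__init__()
        self.subj_tokens = nn.Parameter(
            0.02 * torch.randn(num_motion_modes, tokens_per_role, embed_dim))
        self.obj_tokens = nn.Parameter(
            0.02 * torch.randn(num_motion_modes, tokens_per_role, embed_dim))

    def forward(self, subj_emb, obj_emb, motion_mode):
        # subj_emb / obj_emb: [B, D] class-name embeddings from a frozen
        # vision-language text encoder (assumed precomputed).
        # motion_mode: [B] long tensor of motion-mode ids, one per pair.
        subj_ctx = self.subj_tokens[motion_mode]            # [B, T, D]
        obj_ctx = self.obj_tokens[motion_mode]              # [B, T, D]
        # Compose [subject context, subject, object context, object] and pool
        # into a single relation-prompt embedding.
        prompt = torch.cat(
            [subj_ctx, subj_emb.unsqueeze(1), obj_ctx, obj_emb.unsqueeze(1)], dim=1)
        return prompt.mean(dim=1)                           # [B, D]


# Toy usage: score the composed relation prompts against visual relation features.
prompter = CompositionalPrompt()
subj_emb, obj_emb = torch.randn(2, 512), torch.randn(2, 512)
modes = torch.tensor([coarse_motion_mode(torch.rand(8, 4) * 100,
                                         torch.rand(8, 4) * 100), 2])
rel_prompt = prompter(subj_emb, obj_emb, modes)             # [2, 512]
scores = F.cosine_similarity(rel_prompt, torch.randn(2, 512), dim=-1)
```

In the paper's setting, the composed prompt would be fed together with predicate class names into a frozen vision-language text encoder to build open-vocabulary predicate classifiers; the sketch replaces that step with simple mean pooling and cosine similarity to stay self-contained.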

Abstract (translated)

Prompt tuning with large-scale pretrained vision-language models enables open-vocabulary predictions from models trained on limited base categories, e.g., object classification and detection. In this paper, we propose compositional prompt tuning with motion cues: an extended prompt tuning paradigm for compositional predictions on video data. In particular, we present Relation Prompt (RePro) for Open-vocabulary Video Visual Relation Detection (Open-VidVRD), where conventional prompt tuning is easily biased toward certain subject-object combinations and motion patterns. To this end, RePro addresses the two technical challenges of Open-VidVRD: 1) the prompt tokens should respect the two different semantic roles of subject and object, and 2) the tuning should account for the diverse spatio-temporal motion patterns of subject-object compositions. Without bells and whistles, our RePro achieves new state-of-the-art performance on two VidVRD benchmarks, covering not only the base training object and predicate categories but also unseen ones. Extensive ablations also demonstrate the effectiveness of the proposed compositional and multi-mode prompt design. Code is available at this https URL.

URL

https://arxiv.org/abs/2302.00268

PDF

https://arxiv.org/pdf/2302.00268.pdf

