Abstract
Visual relations are complex, multimodal concepts that play an important role in the way humans perceive the world. Owing to this complexity, high-quality, diverse, and large-scale datasets for visual relations are still absent. To overcome this data barrier, we focus on the problem of few-shot Visual Relationship Detection (VRD), a setting that has so far been neglected by the community. In this work, we present the first pretraining method for few-shot predicate classification that does not require any annotated relations. We achieve this by introducing a generative model that captures the variation of semantic, visual, and spatial information of relations in a latent space, and by later exploiting its representations for efficient few-shot classification. We construct few-shot training splits and report quantitative experiments on the VG200 and VRD datasets, where our model outperforms the baselines. Lastly, we attempt to interpret the model's decisions through various qualitative experiments.
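The abstract does not specify the architecture, so as a rough illustration of the kind of pipeline it describes, below is a minimal sketch assuming a VAE-style generative model over fused semantic, visual, and spatial relation features, with nearest-prototype few-shot classification in the latent space. Every module name, dimension, and the prototype classifier itself is a hypothetical placeholder, not taken from the paper.

```python
# Hypothetical sketch: a VAE over fused relation features, pretrained without
# predicate labels, whose latent codes are reused for few-shot classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationVAE(nn.Module):
    """Encodes fused semantic/visual/spatial relation features into a latent space."""
    def __init__(self, sem_dim=600, vis_dim=2048, spa_dim=8, latent_dim=64):
        super().__init__()
        in_dim = sem_dim + vis_dim + spa_dim   # concatenated modalities
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, in_dim)
        )

    def encode(self, sem, vis, spa):
        h = self.encoder(torch.cat([sem, vis, spa], dim=-1))
        return self.mu(h), self.logvar(h)

    def forward(self, sem, vis, spa):
        mu, logvar = self.encode(sem, vis, spa)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1.0):
    # Standard ELBO: reconstruction term plus KL divergence to the prior.
    rec = F.mse_loss(recon, target, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

def classify_few_shot(model, support, support_labels, query, n_classes):
    # Few-shot predicate classification on top of the pretrained latent space:
    # class prototypes are mean latent codes of the K support examples per class.
    with torch.no_grad():
        mu_s, _ = model.encode(*support)   # (N*K, latent_dim)
        mu_q, _ = model.encode(*query)     # (Q, latent_dim)
    protos = torch.stack(
        [mu_s[support_labels == c].mean(0) for c in range(n_classes)]
    )
    dists = torch.cdist(mu_q, protos)      # Euclidean distance to each prototype
    return dists.argmin(dim=-1)            # nearest-prototype prediction
```

The two-stage structure mirrors the abstract's description: the VAE is pretrained on relation features alone (no predicate annotations needed), and only the frozen latent representations are used at few-shot time; the prototype rule here is one common choice, not necessarily the paper's.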
URL
https://arxiv.org/abs/2311.16261