Abstract
Recent work in vision-and-language pretraining has investigated supervised signals from object detection data to learn better, fine-grained multimodal representations. In this work, we take a step further and explore how to add supervision from small-scale visual relation data. In particular, we propose two pretraining approaches to contextualise visual entities in a multimodal setup. With verbalised scene graphs, we transform visual relation triplets into structured captions and treat them as additional views of images. With masked relation prediction, we further encourage relating entities from visually masked contexts. When applied to strong baselines pretrained on large amounts of Web data, zero-shot evaluations on both coarse-grained and fine-grained tasks show the efficacy of our methods in learning multimodal representations from weakly-supervised relation data.
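To make the two pretraining ideas concrete, the sketch below illustrates (i) verbalising scene-graph triplets into a structured caption that can be paired with an image as an extra view, and (ii) masking the predicate of a triplet so a model must predict the relation between two entities. The templates and function names are illustrative assumptions for exposition only, not the paper's exact implementation.

```python
# Minimal, hypothetical sketch of the two pretraining signals described
# in the abstract; templates and names are assumptions, not the paper's code.

def verbalise_scene_graph(triplets):
    """Turn (subject, predicate, object) visual relation triplets into a
    structured caption, treated as an additional textual view of the image."""
    return ". ".join(f"{s} {p} {o}" for s, p, o in triplets) + "."

def mask_relation(triplet, mask_token="[MASK]"):
    """For masked relation prediction: hide the predicate so the model must
    infer how the two entities relate from the (visually masked) context."""
    s, p, o = triplet
    return f"{s} {mask_token} {o}", p  # (masked text, target predicate)

triplets = [("man", "riding", "horse"), ("horse", "standing on", "grass")]
print(verbalise_scene_graph(triplets))  # man riding horse. horse standing on grass.
print(mask_relation(triplets[0]))       # ('man [MASK] horse', 'riding')
```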
URL
https://arxiv.org/abs/2305.14281