Abstract
Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This limits the quality of the constructed data and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and to represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on relative representations, named RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting.
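To make the relative-representation idea concrete, below is a minimal NumPy sketch of the kind of retrieval the abstract describes: unaligned images and texts are each encoded by an arbitrary unimodal encoder, re-expressed as similarity vectors against the small set of aligned anchor pairs, and then matched in that shared anchor space. The encoders, cosine similarity, and all array shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a (n, d) and b (k, d) -> (n, k).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def relative_representation(embeddings, anchor_embeddings):
    # Represent each item by its similarities to the K anchors -> (n, K).
    return cosine_sim(embeddings, anchor_embeddings)

# Hypothetical stand-ins for unimodal embeddings from off-the-shelf encoders;
# note the two modalities need not share an embedding space or dimensionality.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(1000, 512))   # image-only data, image encoder
txt_emb = rng.normal(size=(2000, 768))   # text-only data, text encoder
anchor_img = rng.normal(size=(64, 512))  # image side of 64 aligned anchor pairs
anchor_txt = rng.normal(size=(64, 768))  # text side of the same anchor pairs

# Both modalities land in the same K-dimensional anchor space, so they become
# directly comparable even though the underlying encoders are unrelated.
rel_img = relative_representation(img_emb, anchor_img)  # (1000, 64)
rel_txt = relative_representation(txt_emb, anchor_txt)  # (2000, 64)

# Retrieve a weakly-aligned text for each image by nearest neighbor in anchor space;
# in the paper, such pairs would then serve as pre-training data.
scores = cosine_sim(rel_img, rel_txt)    # (1000, 2000)
best_text = scores.argmax(axis=1)        # best-matching text index per image
```

In this sketch the anchor pairs play the role that object tags play in prior WVLP methods, but as global rather than local descriptions; the generation-based variant mentioned in the abstract is not shown here.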
URL
https://arxiv.org/abs/2305.15483