Abstract
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
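The architecture the abstract describes has three moving parts: a frozen image encoder, a lightweight Querying Transformer (Q-Former) whose learned query tokens attend to the image features, and a projection into a frozen LLM's embedding space, where the query outputs act as soft visual prompts. The sketch below illustrates that data flow in PyTorch. It is a simplified illustration under stated assumptions, not the authors' implementation: the module structure, the dimensions for the vision encoder and LLM, and the use of `nn.TransformerDecoder` as a stand-in for the Q-Former's self- plus cross-attention blocks are all assumptions; only the 32 query tokens and the 768-wide Q-Former hidden size come from the paper.

```python
import torch
import torch.nn as nn

class BLIP2Sketch(nn.Module):
    """Minimal sketch of the BLIP-2 data flow (not the official implementation).

    A frozen image encoder produces patch features; a small set of learned
    query tokens attends to them through a lightweight transformer (a stand-in
    for the Q-Former); a linear layer projects the query outputs into the
    frozen language model's embedding space, where they serve as soft visual
    prompts prepended to the text embeddings.
    """

    def __init__(self, image_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int = 1408,      # e.g. ViT-g/14 feature width (assumption)
                 qformer_dim: int = 768,      # Q-Former hidden size (per the paper)
                 llm_dim: int = 2560,         # e.g. OPT-2.7B embedding width (assumption)
                 num_query_tokens: int = 32): # 32 learned queries (per the paper)
        super().__init__()
        self.image_encoder = image_encoder
        self.language_model = language_model
        # Freeze both large models; only the lightweight bridge below is trainable.
        for p in self.image_encoder.parameters():
            p.requires_grad = False
        for p in self.language_model.parameters():
            p.requires_grad = False

        # Learned query tokens, shared across images.
        self.query_tokens = nn.Parameter(
            torch.randn(1, num_query_tokens, qformer_dim) * 0.02)
        self.vision_proj = nn.Linear(vision_dim, qformer_dim)
        # Stand-in for the Q-Former: queries self-attend, then cross-attend
        # to the frozen image features (hypothetical layer count).
        layer = nn.TransformerDecoderLayer(d_model=qformer_dim, nhead=12,
                                           batch_first=True)
        self.qformer = nn.TransformerDecoder(layer, num_layers=6)
        self.llm_proj = nn.Linear(qformer_dim, llm_dim)

    def encode_image(self, pixel_values: torch.Tensor) -> torch.Tensor:
        """Return soft visual prompts of shape (batch, num_queries, llm_dim)."""
        with torch.no_grad():  # the image encoder stays frozen
            patch_feats = self.image_encoder(pixel_values)  # (B, N_patches, vision_dim)
        memory = self.vision_proj(patch_feats)
        queries = self.query_tokens.expand(pixel_values.size(0), -1, -1)
        visual_tokens = self.qformer(tgt=queries, memory=memory)
        return self.llm_proj(visual_tokens)
```

In the paper's first stage, the Q-Former outputs are trained against paired text with representation-learning objectives; in the second stage, the projected query outputs are prepended to the text embeddings and the frozen LLM's generation loss trains only the bridge, which is why the trainable-parameter count stays so small relative to end-to-end models like Flamingo.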
URL
https://arxiv.org/abs/2301.12597