Abstract
In this report, we present our champion solution to the WSDM2023 Toloka Visual Question Answering (VQA) Challenge. Unlike common VQA and visual grounding (VG) tasks, this challenge involves a more complex scenario, i.e., inferring and locating the object implicitly specified by a given interrogative question. For this task, we leverage ViT-Adapter, a pre-training-free adapter network, to adapt the multi-modal pre-trained Uni-Perceiver for better cross-modal localization. Our method ranks first on the leaderboard, achieving 77.5 and 76.347 IoU on the public and private test sets, respectively. This shows that ViT-Adapter is also an effective paradigm for adapting unified perception models to vision-language downstream tasks. Code and models will be released at this https URL.
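The abstract only summarizes the approach at a high level. As a rough illustration of the general idea (an adapter injecting spatial features into pre-trained transformer tokens, followed by a box-regression head for grounding), the following is a minimal PyTorch-style sketch. It is not the authors' released code: all module names, shapes, and design choices here are illustrative assumptions.

```python
# Hypothetical sketch of a ViT-Adapter-style injection module plus a simple
# box head for grounding. Not the official implementation; shapes and names
# are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialPriorAdapter(nn.Module):
    """Injects conv-based spatial features into pre-trained patch tokens."""

    def __init__(self, embed_dim=768, num_heads=8):
        super().__init__()
        # Lightweight conv stem producing spatial prior features
        # (randomly initialized, i.e. no extra pre-training required).
        self.conv_stem = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=16, stride=16),
            nn.GELU(),
        )
        # Cross-attention: backbone tokens (queries) attend to conv features.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, vit_tokens, image):
        # vit_tokens: (B, N, C) patch tokens from the pre-trained backbone
        # image:      (B, 3, H, W) raw input image
        spatial = self.conv_stem(image).flatten(2).transpose(1, 2)  # (B, N', C)
        injected, _ = self.cross_attn(vit_tokens, spatial, spatial)
        return self.norm(vit_tokens + injected)  # residual injection


class BoxHead(nn.Module):
    """Regresses one normalized (cx, cy, w, h) box from the fused tokens."""

    def __init__(self, embed_dim=768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 4), nn.Sigmoid(),
        )

    def forward(self, tokens):
        return self.mlp(tokens.mean(dim=1))  # pool tokens, predict one box


if __name__ == "__main__":
    B, N, C = 2, 196, 768
    vit_tokens = torch.randn(B, N, C)   # stand-in for pre-trained backbone output
    image = torch.randn(B, 3, 224, 224)
    adapter, head = SpatialPriorAdapter(C), BoxHead(C)
    boxes = head(adapter(vit_tokens, image))
    print(boxes.shape)  # torch.Size([2, 4])
```

In this sketch the pre-trained backbone (Uni-Perceiver's visual encoder in the paper) would stay largely untouched, while the randomly initialized adapter and box head supply the dense spatial cues needed for localization; the actual fusion of the interrogative question is omitted here for brevity.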
URL
https://arxiv.org/abs/2301.09045