Abstract
Visual information is central to conversation: body gestures and facial expressions, for example, contribute to meaning that transcends words alone. To date, however, most neural conversational models are limited to just text. We introduce CHAMPAGNE, a generative model of conversations that can account for visual contexts. To train CHAMPAGNE, we collect and release YTD-18M, a large-scale corpus of 18M video-based dialogues. YTD-18M is constructed from web videos: crucial to our data collection pipeline is a pretrained language model that converts error-prone automatic transcripts to a cleaner dialogue format while maintaining meaning. Human evaluation reveals that YTD-18M is more sensible and specific than prior resources (MMDialog, 1M dialogues), while maintaining visual-groundedness. Experiments demonstrate that 1) CHAMPAGNE learns to conduct conversation from YTD-18M; and 2) when fine-tuned, it achieves state-of-the-art results on four vision-language tasks focused on real-world conversations. We release data, models, and code at this https URL.
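The key data-collection step above is using a pretrained language model to rewrite error-prone automatic transcripts into a cleaner, turn-segmented dialogue format. Below is a minimal sketch of that idea, not the authors' actual pipeline: the model checkpoint (flan-t5-large), the prompt wording, and the decoding settings are all illustrative assumptions.

```python
# Sketch: rewrite a noisy ASR transcript into speaker-segmented dialogue
# with a pretrained seq2seq LM. Checkpoint and prompt are hypothetical,
# not the pipeline described in the paper.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "google/flan-t5-large"  # assumed choice of rewriting model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def transcript_to_dialogue(transcript: str) -> str:
    """Convert an error-prone ASR transcript into cleaner dialogue turns,
    preserving the original meaning."""
    prompt = (
        "Rewrite the following video transcript as a dialogue between "
        "speakers, fixing transcription errors but keeping the meaning:\n"
        f"{transcript}"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(transcript_to_dialogue(
    "hey guys welcome back um today were gonna talk about the new model"
))
```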
URL
https://arxiv.org/abs/2303.09713