Abstract
We introduce a video framework for modeling the association between verbal and non-verbal communication during dyadic conversation. Given the input speech of a speaker, our approach retrieves a video of a listener whose facial expressions are socially appropriate for the context. The listener can further be conditioned on their own goals, personality, or background. Our approach models conversations through a composition of large language models and vision-language models, creating internal representations that are interpretable and controllable. To study multimodal communication, we propose a new video dataset of unscripted conversations covering diverse topics and demographics. Experiments and visualizations show that our approach outputs listeners that are significantly more socially appropriate than baselines. However, many challenges remain, and we release our dataset publicly to spur further progress. See our website for video results, data, and code: this https URL.
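To make the retrieval idea above concrete, here is a minimal sketch (not the authors' released code) of one way such a pipeline could be composed: an LLM maps the speaker's speech and the listener's goal to a textual description of an appropriate reaction, and a CLIP-style text encoder scores that description against precomputed embeddings of candidate listener clips. All function and variable names below are hypothetical placeholders, and the encoders are stubbed out for illustration.

```python
import numpy as np

def describe_listener_reaction(transcript: str, listener_goal: str) -> str:
    """Hypothetical LLM call: map speaker speech + listener goal to a
    textual description of a socially appropriate facial reaction."""
    # e.g. prompt an LLM: "The speaker said <transcript>. The listener wants
    # to <listener_goal>. Describe the listener's facial expression."
    return f"a listener reacting to '{transcript}' while trying to {listener_goal}"

def embed_text(text: str, dim: int = 512) -> np.ndarray:
    """Stand-in for a vision-language (CLIP-style) text encoder;
    returns a deterministic unit vector for demonstration only."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def retrieve_listener_clip(transcript: str,
                           listener_goal: str,
                           clip_embeddings: np.ndarray,
                           clip_ids: list[str]) -> str:
    """Return the id of the candidate listener clip whose precomputed
    embedding best matches the described reaction."""
    query = embed_text(describe_listener_reaction(transcript, listener_goal))
    scores = clip_embeddings @ query  # cosine similarity (unit vectors)
    return clip_ids[int(np.argmax(scores))]

if __name__ == "__main__":
    # Toy candidate set; real embeddings would come from a video encoder.
    ids = ["clip_smile", "clip_frown", "clip_nod"]
    embs = np.stack([embed_text(i) for i in ids])
    print(retrieve_listener_clip("I just got the job!", "be supportive", embs, ids))
```

The interpretable intermediate (the textual reaction description) is what makes the internal representation controllable: editing the listener's stated goal or persona changes the description, and therefore the retrieved clip, without retraining any component.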
Abstract (translated)
We introduce a video framework for modeling the association between verbal and non-verbal communication in dyadic conversation. Given a speaker's input speech, our approach retrieves a video of a listener whose facial expressions are socially appropriate for the context. Our approach also allows the listener to be conditioned on their own goals, personality, or background. We model conversations through a composition of large language models and vision-language models, creating internal representations that are interpretable and controllable. To study multimodal communication, we propose a new video dataset of unscripted conversations covering diverse topics and demographics. Experiments and visualizations show that our approach outputs listeners that are significantly more socially appropriate than baselines. However, many challenges remain, so we release our dataset publicly to spur further progress. See our website for video results, data, and code: this https URL.
URL
https://arxiv.org/abs/2301.10939