Abstract
In this study, we address the importance of modeling behavior style in virtual agents for personalized human-agent interaction. We propose a machine learning approach that synthesizes gestures, driven by prosodic features and text, in the style of different speakers, including speakers unseen during training. Our model performs zero-shot multimodal style transfer using multimodal data from the PATS database, which contains videos of diverse speakers. We treat style as a pervasive element of speech that shapes the expressivity of communicative behaviors, while content is conveyed through multimodal signals and text. By disentangling content and style, we infer the style embedding directly, even for speakers not included in the training phase, without any additional training or fine-tuning. Objective and subjective evaluations validate our approach and compare it against two baseline methods.
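The core idea described above, inferring a style embedding directly from an unseen speaker's data and conditioning gesture synthesis on it, can be illustrated with a toy sketch. This is not the paper's architecture: `style_embedding` (mean pooling) stands in for the learned style encoder, `synthesize` stands in for the trained gesture decoder, and all array shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_embedding(features: np.ndarray) -> np.ndarray:
    """Infer a fixed-size style embedding from variable-length
    multimodal features (frames x dims) by mean pooling.
    Mean pooling is a placeholder for a learned style encoder."""
    return features.mean(axis=0)

def synthesize(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Toy generator: modulate content frames with the style vector
    (a placeholder for a trained gesture decoder)."""
    return content * (1.0 + np.tanh(style))

# A speaker unseen during training: the style embedding is computed
# directly from their data, with no extra training or fine-tuning step.
unseen_speaker_features = rng.normal(size=(120, 8))  # 120 frames, 8 dims
content_features = rng.normal(size=(120, 8))         # prosody/text features

style = style_embedding(unseen_speaker_features)     # shape (8,)
gestures = synthesize(content_features, style)       # shape (120, 8)
print(gestures.shape)  # → (120, 8)
```

The key property mirrored here is that nothing in the style-inference path is updated per speaker: the embedding is a single forward pass over the new speaker's data.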
URL
https://arxiv.org/abs/2305.12887