Abstract
This paper addresses the problem of generating lifelike holistic co-speech motions for 3D avatars, focusing on two key aspects: variability and coordination. Variability allows the avatar to exhibit a wide range of motions even with similar speech content, while coordination ensures a harmonious alignment among facial expressions, hand gestures, and body poses. We aim to achieve both with ProbTalk, a unified probabilistic framework designed to jointly model facial, hand, and body movements in speech. ProbTalk builds on the variational autoencoder (VAE) architecture and incorporates three core designs. First, we introduce product quantization (PQ) to the VAE, which enriches the representation of complex holistic motion. Second, we devise a novel non-autoregressive model that embeds 2D positional encoding into the product-quantized representation, thereby preserving essential structural information of the PQ codes. Last, we employ a second stage to refine the preliminary prediction, further sharpening the high-frequency details. Coupling these three designs enables ProbTalk to generate natural and diverse holistic co-speech motions, outperforming several state-of-the-art methods in qualitative and quantitative evaluations, particularly in terms of realism. Our code and model will be released for research purposes at this https URL.
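To make the two mechanisms named above concrete, the sketch below illustrates product quantization of a motion latent and a 2D positional encoding over the resulting (time x group) grid of code tokens. This is a minimal illustration under assumed conventions, not the authors' released code: it assumes PyTorch, and the module names, tensor shapes, and hyperparameters (256-dim latents, 4 groups, 512 codewords) are invented purely for exposition.

import torch
import torch.nn as nn

class ProductQuantizer(nn.Module):
    # Product quantization: split a D-dim latent into G sub-vectors and
    # quantize each sub-vector against its own codebook of K codewords.
    def __init__(self, dim=256, groups=4, codebook_size=512):
        super().__init__()
        assert dim % groups == 0
        self.groups, self.sub_dim = groups, dim // groups
        self.codebooks = nn.Parameter(torch.randn(groups, codebook_size, self.sub_dim))

    def forward(self, z):                                  # z: (B, T, D)
        B, T, _ = z.shape
        zg = z.view(B, T, self.groups, self.sub_dim).transpose(1, 2)  # (B, G, T, s)
        cb = self.codebooks.unsqueeze(0).expand(B, -1, -1, -1)        # (B, G, K, s)
        idx = torch.cdist(zg, cb).argmin(-1)                          # (B, G, T)
        q = torch.gather(cb, 2, idx.unsqueeze(-1).expand(-1, -1, -1, self.sub_dim))
        q = q.transpose(1, 2).reshape(B, T, -1)            # back to (B, T, D)
        # A straight-through estimator (q = z + (q - z).detach()) would be
        # added for end-to-end training; omitted here for brevity.
        return q, idx.transpose(1, 2)                      # indices as a (B, T, G) grid

class GridPositionalEncoding2D(nn.Module):
    # 2D positional encoding: learned embeddings for the time axis and the
    # group axis, summed onto the token features so a non-autoregressive
    # model sees both coordinates of each PQ code in the grid.
    def __init__(self, dim=256, max_frames=512, groups=4):
        super().__init__()
        self.time_emb = nn.Embedding(max_frames, dim)
        self.group_emb = nn.Embedding(groups, dim)

    def forward(self, tokens):                             # tokens: (B, T, G, dim)
        B, T, G, _ = tokens.shape
        t = torch.arange(T, device=tokens.device)
        g = torch.arange(G, device=tokens.device)
        return tokens + self.time_emb(t)[None, :, None, :] + self.group_emb(g)[None, None, :, :]

# Usage: quantize a toy batch of 16-frame, 256-dim motion latents.
pq = ProductQuantizer()
q, idx = pq(torch.randn(2, 16, 256))                       # q: (2, 16, 256), idx: (2, 16, 4)

Splitting the latent across G codebooks is what lets PQ represent a combinatorially large space (K^G composite codes) with small codebooks, which is the stated motivation for using it to enrich the holistic-motion representation.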
URL
https://arxiv.org/abs/2404.00368