Abstract
Gesture synthesis is a vital area of human-computer interaction, with wide-ranging applications across fields such as film, robotics, and virtual reality. Recent advances have used diffusion models and attention mechanisms to improve gesture synthesis. However, due to the high computational complexity of these techniques, generating long and diverse sequences with low latency remains a challenge. We explore the potential of state space models (SSMs) to address this challenge, implementing a two-stage modeling strategy with discrete motion priors to enhance the quality of gestures. Leveraging the foundational Mamba block, we introduce MambaTalk, enhancing gesture diversity and rhythm through multimodal integration. Extensive experiments demonstrate that our method matches or exceeds the performance of state-of-the-art models.
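The appeal of SSMs over attention is that they process a sequence with a linear-time recurrence rather than a quadratic-cost pairwise comparison, which is what enables long, low-latency generation. A minimal sketch of the underlying discrete state-space recurrence (h_t = A h_{t-1} + B x_t, y_t = C h_t) is shown below; note that Mamba additionally makes the discretized parameters input-dependent ("selective"), which this illustration omits, and all names and shapes here are illustrative rather than taken from the paper's implementation.

```python
import numpy as np

def ssm_scan(A, B, C, x):
    """Run a discrete linear state space model over a 1-D input sequence.

    A: (d, d) state transition matrix
    B: (d,)   input projection
    C: (d,)   output projection
    x: (T,)   input sequence
    Returns y: (T,) output sequence.
    """
    d = A.shape[0]
    h = np.zeros(d)                      # hidden state, constant memory
    y = np.empty(len(x), dtype=float)
    for t, x_t in enumerate(x):          # O(T) sequential scan
        h = A @ h + B * x_t              # state update
        y[t] = C @ h                     # readout
    return y

# Example: a stable diagonal A gives an exponentially decaying memory of
# past inputs, so an impulse at t=0 fades over subsequent steps.
A = np.diag([0.9, 0.5])
B = np.ones(2)
C = np.ones(2)
y = ssm_scan(A, B, C, np.array([1.0, 0.0, 0.0]))
```

Because the state `h` is a fixed-size summary of the past, each new frame costs the same to generate regardless of how long the sequence already is, in contrast to attention, whose per-step cost grows with sequence length.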
URL
https://arxiv.org/abs/2403.09471