Abstract
Automating the synthesis of coordinated bimanual piano performances poses significant challenges, particularly in capturing the intricate choreography between the hands while preserving their distinct kinematic signatures. In this paper, we propose a dual-stream neural framework designed to generate synchronized hand gestures for piano playing from audio input, addressing the critical challenge of modeling both hand independence and coordination. Our framework introduces two key innovations: (i) a decoupled diffusion-based generation framework that independently models each hand's motion via dual-noise initialization, sampling distinct latent noise for each hand while leveraging a shared positional condition, and (ii) a Hand-Coordinated Asymmetric Attention (HCAA) mechanism that suppresses symmetric (common-mode) noise to highlight asymmetric, hand-specific features, while adaptively enhancing inter-hand coordination during denoising. The system operates hierarchically: it first predicts 3D hand positions from audio features and then generates joint angles through position-aware diffusion models, where parallel denoising streams interact via HCAA. Comprehensive evaluations demonstrate that our framework outperforms existing state-of-the-art methods across multiple metrics.
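The dual-noise initialization and common-mode suppression described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the toy shapes, the `suppress_common_mode` helper, and the averaging-based definition of the symmetric component are all illustrative assumptions; it only shows the core idea that two streams share one condition, start from distinct noise, and have their shared (symmetric) component removed to expose hand-specific signal.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 16, 8  # toy sizes: time steps, feature dimension

# Shared positional condition (would come from the audio-to-position stage).
shared_pos_cond = rng.normal(size=(T, D))

# Dual-noise initialization: distinct latent noise per hand, same condition.
noise_left = rng.normal(size=(T, D))
noise_right = rng.normal(size=(T, D))

def suppress_common_mode(h_left, h_right, alpha=1.0):
    """Subtract the symmetric (common-mode) component shared by both
    streams, keeping each hand's asymmetric, hand-specific residual."""
    common = 0.5 * (h_left + h_right)
    return h_left - alpha * common, h_right - alpha * common

asym_left, asym_right = suppress_common_mode(noise_left, noise_right)

# With alpha = 1, the residuals are exact negatives of each other:
# only the anti-symmetric, hand-specific signal survives.
print(np.allclose(asym_left, -asym_right))  # True
```

In the actual HCAA mechanism, this suppression would be learned and applied inside the attention computation during denoising rather than as a fixed subtraction; the sketch only captures the common-mode/asymmetric decomposition.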
URL
https://arxiv.org/abs/2504.09885