Abstract
We present a zero-shot pose optimization method that enforces accurate physical contact constraints when estimating the 3D pose of humans. Our central insight is that since language is often used to describe physical interaction, large pretrained text-based models can act as priors on pose estimation. We can thus leverage this insight to improve pose estimation by converting natural language descriptors, generated by a large multimodal model (LMM), into tractable losses to constrain the 3D pose optimization. Despite its simplicity, our method produces surprisingly compelling pose reconstructions of people in close contact, correctly capturing the semantics of the social and physical interactions. We demonstrate that our method rivals more complex state-of-the-art approaches that require expensive human annotation of contact points and training specialized models. Moreover, unlike previous approaches, our method provides a unified framework for resolving self-contact and person-to-person contact.
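The abstract only sketches the mechanism, so the following is a minimal illustrative sketch (not the paper's actual implementation) of how LMM-generated contact descriptors could become a tractable loss: each descriptor names a pair of body parts, possibly on the same person (covering self-contact), and the loss penalizes the 3D distance between the corresponding joints during optimization. The joint names, the JOINT_INDEX map, and optimizing raw joint positions are all assumptions for readability; a real system would optimize the parameters of a body model such as SMPL instead.

```python
import torch

# Hypothetical joint-name -> index map for a SMPL-style skeleton
# (names and indices are illustrative, not the paper's convention).
JOINT_INDEX = {"left_hand": 22, "right_hand": 23, "right_shoulder": 17, "left_knee": 4}

def contact_loss(joints, contact_pairs):
    """Sum of squared distances between joint pairs the LMM says are in contact.

    joints: (num_people, num_joints, 3) tensor of 3D joint positions.
    contact_pairs: list of (person_a, joint_a, person_b, joint_b) tuples
        parsed from LMM text such as "person A's left hand touches
        person B's right shoulder"; person_a == person_b encodes self-contact.
    """
    loss = joints.new_zeros(())
    for pa, ja, pb, jb in contact_pairs:
        diff = joints[pa, JOINT_INDEX[ja]] - joints[pb, JOINT_INDEX[jb]]
        loss = loss + (diff ** 2).sum()
    return loss

# Toy usage: two people with random initial joints, one person-to-person
# contact and one self-contact constraint; gradient descent on the joints
# stands in for optimizing body-model pose parameters.
joints = torch.randn(2, 24, 3, requires_grad=True)
pairs = [(0, "left_hand", 1, "right_shoulder"),   # person-to-person contact
         (0, "right_hand", 0, "left_knee")]       # self-contact
opt = torch.optim.Adam([joints], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = contact_loss(joints, pairs)
    loss.backward()
    opt.step()
```

Because both contact types reduce to the same joint-pair penalty, this single loss handles self-contact and person-to-person contact uniformly, which is the unified framing the abstract claims.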
URL
https://arxiv.org/abs/2405.03689