Abstract
Text2Motion aims to generate human motions from texts. Existing datasets rely on the assumption that texts include action labels (such as "walk, bend, and pick up"), which is not flexible in practical scenarios. This paper redefines the problem under a more realistic assumption: the texts are arbitrary. Specifically, arbitrary texts include existing action texts composed of action labels (e.g., "A person walks and bends to pick up something") and newly introduced scene texts without explicit action labels (e.g., "A person notices his wallet on the ground ahead"). To bridge the gap between this realistic setting and existing datasets, we expand the action texts in the HumanML3D dataset with additional scene texts, creating a new HumanML3D++ dataset that includes arbitrary texts. On this challenging dataset, we benchmark existing state-of-the-art methods and propose a novel two-stage framework that first extracts action labels from arbitrary texts using a Large Language Model (LLM) and then generates motions from those action labels. Extensive experiments are conducted under different application scenarios to validate the effectiveness of the proposed framework on both the existing and the proposed datasets. The results indicate that Text2Motion in this realistic setting is very challenging, fostering new research in this practical direction. Our dataset and code will be released.
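The two-stage idea described above can be sketched in code. Everything here is illustrative: the function names (`extract_action_labels`, `generate_motion`), the prompt, and the stubbed LLM reply are assumptions for the sketch, not the authors' actual API or model.

```python
# Hypothetical sketch of the paper's two-stage framework:
# stage 1 uses an LLM to map an arbitrary (possibly scene-level) text
# to explicit action labels; stage 2 feeds those labels to a
# label-conditioned text-to-motion generator.
from typing import List

# Illustrative prompt; the paper's actual prompt is not given here.
PROMPT_TEMPLATE = (
    "List the body actions a person would perform in this scene, "
    "as a short comma-separated list of action labels.\n"
    "Scene: {text}\nActions:"
)

def extract_action_labels(text: str, llm=None) -> List[str]:
    """Stage 1: query an LLM for action labels (stubbed for illustration)."""
    if llm is not None:
        reply = llm(PROMPT_TEMPLATE.format(text=text))
    else:
        # Canned response standing in for a real LLM call.
        reply = "walk, bend, pick up"
    return [label.strip() for label in reply.split(",") if label.strip()]

def generate_motion(action_labels: List[str]) -> str:
    """Stage 2: placeholder for an action-label-conditioned motion model."""
    return "motion(" + " -> ".join(action_labels) + ")"

scene_text = "A person notices his wallet on the ground ahead."
labels = extract_action_labels(scene_text)
print(labels)                   # ['walk', 'bend', 'pick up']
print(generate_motion(labels))  # motion(walk -> bend -> pick up)
```

The stub makes the pipeline runnable without a model; in practice, stage 1 would call an actual LLM and stage 2 an actual motion-generation network.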
URL
https://arxiv.org/abs/2404.14745