
TAAT: Think and Act from Arbitrary Texts in Text2Motion

2024-04-23 04:54:32
Runqi Wang, Caoyuan Ma, GuoPeng Li, Zheng Wang

Abstract

Text2Motion aims to generate human motions from texts. Existing datasets rely on the assumption that texts include action labels (such as "walk, bend, and pick up"), which is inflexible for practical scenarios. This paper redefines the problem under a more realistic assumption: the texts are arbitrary. Specifically, arbitrary texts include existing action texts composed of action labels (e.g., "A person walks and bends to pick up something") and newly introduced scene texts without explicit action labels (e.g., "A person notices his wallet on the ground ahead"). To bridge the gap between this realistic setting and existing datasets, we expand the action texts of the HumanML3D dataset with scene texts, creating a new HumanML3D++ dataset of arbitrary texts. On this challenging dataset, we benchmark existing state-of-the-art methods and propose a novel two-stage framework that first extracts action labels from arbitrary texts with a Large Language Model (LLM) and then generates motions from those action labels. Extensive experiments are conducted under different application scenarios to validate the effectiveness of the proposed framework on both the existing and the proposed datasets. The results indicate that Text2Motion in this realistic setting is very challenging, fostering new research in this practical direction. Our dataset and code will be released.
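
The two-stage pipeline described above ("think": an LLM extracts action labels from arbitrary text; "act": a motion generator consumes them) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `call_llm`, `motion_generator`, and the prompt text are hypothetical placeholders for whatever LLM API and pretrained Text2Motion model one plugs in.

```python
from typing import Callable, List

# Hypothetical prompt; the paper does not publish its exact wording.
EXTRACTION_PROMPT = (
    "Infer the human actions implied by the following text. "
    "Answer with a short comma-separated list of action labels "
    "(e.g., 'walk, bend, pick up').\n\nText: {text}\nActions:"
)

def extract_action_labels(text: str, call_llm: Callable[[str], str]) -> List[str]:
    # Stage 1 ("Think"): map an arbitrary text to explicit action labels.
    reply = call_llm(EXTRACTION_PROMPT.format(text=text))
    return [label.strip() for label in reply.split(",") if label.strip()]

def arbitrary_text_to_motion(text: str,
                             call_llm: Callable[[str], str],
                             motion_generator: Callable[[str], object]):
    # Stage 2 ("Act"): hand the recovered action labels to any pretrained
    # Text2Motion generator (e.g., one trained on HumanML3D action texts).
    labels = extract_action_labels(text, call_llm)
    action_text = "A person " + " and ".join(labels) + "."
    return motion_generator(action_text)

# Stub LLM and generator so the sketch runs without external services.
if __name__ == "__main__":
    stub_llm = lambda prompt: "walks, bends, picks up something"
    stub_generator = lambda action_text: f"<motion sequence for: {action_text!r}>"
    scene_text = "A person notices his wallet on the ground ahead."
    print(arbitrary_text_to_motion(scene_text, stub_llm, stub_generator))
```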


URL

https://arxiv.org/abs/2404.14745

PDF

https://arxiv.org/pdf/2404.14745.pdf

