ChatHuman: Language-driven 3D Human Understanding with Retrieval-Augmented Tool Reasoning

2024-05-07 17:59:31
Jing Lin, Yao Feng, Weiyang Liu, Michael J. Black

Abstract

Numerous methods have been proposed to detect, estimate, and analyze properties of people in images, including the estimation of 3D pose, shape, contact, human-object interaction, emotion, and more. Each of these methods works in isolation instead of synergistically. Here we address this problem and build a language-driven human understanding system -- ChatHuman, which combines and integrates the skills of many different methods. To do so, we finetune a Large Language Model (LLM) to select and use a wide variety of existing tools in response to user inputs. In doing so, ChatHuman is able to combine information from multiple tools to solve problems more accurately than the individual tools themselves and to leverage tool output to improve its ability to reason about humans. The novel features of ChatHuman include leveraging academic publications to guide the application of 3D human-related tools, employing a retrieval-augmented generation model to generate in-context-learning examples for handling new tools, and discriminating and integrating tool results to enhance 3D human understanding. Our experiments show that ChatHuman outperforms existing models in both tool selection accuracy and performance across multiple 3D human-related tasks. ChatHuman is a step towards consolidating diverse methods for human analysis into a single, powerful system for 3D human reasoning.
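
To make the retrieval-augmented tool-selection idea concrete, here is a minimal Python sketch of the general pattern the abstract describes: each tool carries a text description (e.g., distilled from its academic publication), the user query is matched against those descriptions, and the best-matching tools are surfaced as candidates or as in-context examples for the LLM. Everything below is an illustrative assumption, not ChatHuman's actual implementation: the tool registry, the `retrieve_tools` helper, and the toy bag-of-words embedding are placeholders for the paper's finetuned LLM and learned retrieval model.

```python
# Hedged sketch of retrieval-augmented tool selection. The tool names and
# descriptions are hypothetical; ChatHuman itself uses a finetuned LLM and
# learned embeddings rather than this bag-of-words similarity.
from collections import Counter
import math

# Hypothetical tool registry: name -> short description (imagined as
# distilled from each tool's publication, per the paper's idea).
TOOLS = {
    "pose_estimator":   "estimate 3D human body pose and shape from an image",
    "contact_detector": "detect human-scene and human-object contact regions",
    "emotion_reader":   "recognize facial expression and emotion of a person",
}

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_tools(query: str, k: int = 2) -> list[str]:
    """Rank tools by query/description similarity; the top-k descriptions
    would then be placed into the LLM prompt as in-context examples."""
    q = embed(query)
    ranked = sorted(TOOLS, key=lambda t: cosine(q, embed(TOOLS[t])), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # Expected to rank the hypothetical emotion tool first for this query.
    print(retrieve_tools("what emotion does the person in this image show?"))
```

In the paper's setting, the retrieved descriptions serve a second purpose beyond ranking: they give the LLM in-context examples for tools it was never finetuned on, which is what lets the system handle new tools without retraining.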

URL

https://arxiv.org/abs/2405.04533

PDF

https://arxiv.org/pdf/2405.04533.pdf

