Abstract
Numerous methods have been proposed to detect, estimate, and analyze properties of people in images, including the estimation of 3D pose, shape, contact, human-object interaction, emotion, and more. Each of these methods works in isolation rather than synergistically. Here we address this problem and build ChatHuman, a language-driven human understanding system that combines and integrates the skills of many different methods. To do so, we finetune a Large Language Model (LLM) to select and use a wide variety of existing tools in response to user inputs. As a result, ChatHuman can combine information from multiple tools to solve problems more accurately than the individual tools themselves, and can leverage tool output to improve its ability to reason about humans. The novel features of ChatHuman include leveraging academic publications to guide the application of 3D human-related tools, employing a retrieval-augmented generation model to produce in-context-learning examples for handling new tools, and discriminating and integrating tool results to enhance 3D human understanding. Our experiments show that ChatHuman outperforms existing models in both tool-selection accuracy and performance across multiple 3D human-related tasks. ChatHuman is a step toward consolidating diverse methods for human analysis into a single, powerful system for 3D human reasoning.
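The abstract describes selecting among many human-analysis tools by matching a user request against descriptions of the tools (e.g., drawn from their publications) and retrieving relevant ones to ground the LLM's choice. The following is a minimal, hypothetical sketch of that retrieval step, assuming a toy tool registry and a simple bag-of-words cosine similarity; the tool names, descriptions, and scoring scheme are illustrative assumptions, not the paper's actual implementation.

```python
import math
import re
from collections import Counter

# Hypothetical tool registry: name -> short description, standing in for
# text drawn from each tool's documentation or publication.
TOOL_DOCS = {
    "pose_estimator": "estimate 3d human pose and shape from a single image",
    "contact_detector": "detect human scene and object contact regions on the body",
    "emotion_recognizer": "recognize human emotion and facial expression in images",
}

def _vec(text):
    """Bag-of-words term counts over lowercase alphanumeric tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def select_tools(query, k=2):
    """Rank tools by similarity between the query and each description."""
    q = _vec(query)
    ranked = sorted(TOOL_DOCS,
                    key=lambda name: _cosine(q, _vec(TOOL_DOCS[name])),
                    reverse=True)
    return ranked[:k]

print(select_tools("what is the 3d pose of the person in this image?", k=1))
# -> ['pose_estimator']
```

In the system described by the abstract, the retrieved tool descriptions would then be placed into the LLM's context as in-context-learning examples, so that even tools unseen during finetuning can be invoked; a production version would use learned text embeddings rather than word counts.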
URL
https://arxiv.org/abs/2405.04533