Paper Reading AI Learner

EMO-X: Efficient Multi-Person Pose and Shape Estimation in One-Stage

2025-04-11 17:30:46
Haohang Jian, Jinlu Zhang, Junyi Wu, Zhigang Tu

Abstract

Expressive Human Pose and Shape Estimation (EHPS) aims to jointly estimate human pose, hand gesture, and facial expression from monocular images. Existing methods predominantly rely on Transformer-based architectures, which suffer from quadratic complexity in self-attention, leading to substantial computational overhead, especially in multi-person scenarios. Recently, Mamba has emerged as a promising alternative to Transformers due to its efficient global modeling capability. However, it remains limited in capturing fine-grained local dependencies, which are essential for precise EHPS. To address these issues, we propose EMO-X, the Efficient Multi-person One-stage model for multi-person EHPS. Specifically, we explore a Scan-based Global-Local Decoder (SGLD) that integrates global context with skeleton-aware local features to iteratively enhance human tokens. Our EMO-X leverages the superior global modeling capability of Mamba and designs a local bidirectional scan mechanism for skeleton-aware local refinement. Comprehensive experiments demonstrate that EMO-X strikes an excellent balance between efficiency and accuracy. Notably, it achieves a significant reduction in computational complexity, requiring 69.8% less inference time compared to state-of-the-art (SOTA) methods, while outperforming most of them in accuracy.
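The paper does not publish implementation details here, but the core idea it names, a bidirectional scan that refines token features with a linear-time recurrence instead of quadratic self-attention, can be illustrated with a minimal sketch. Everything below (function names, the scalar decay/input gains `a` and `b`, the simple additive fusion of the two directions) is a hypothetical simplification for illustration, not EMO-X's actual SGLD module:

```python
import numpy as np

def ssm_scan(tokens, a=0.9, b=0.1):
    """One-directional linear scan: h_t = a * h_{t-1} + b * x_t.

    tokens: (seq_len, dim) array of token features.
    Runs in O(seq_len), unlike O(seq_len^2) self-attention.
    """
    h = np.zeros(tokens.shape[1])
    out = []
    for x_t in tokens:
        h = a * h + b * x_t  # recurrent state carries global context
        out.append(h)
    return np.stack(out)

def bidirectional_scan(tokens, a=0.9, b=0.1):
    """Scan the sequence forward and backward, then fuse.

    The backward pass lets each token also see context that
    follows it, approximating the 'bidirectional scan' idea
    with a simple sum as the fusion step (an assumption here).
    """
    fwd = ssm_scan(tokens, a, b)
    bwd = ssm_scan(tokens[::-1], a, b)[::-1]
    return fwd + bwd
```

For example, `bidirectional_scan(np.ones((5, 4)), a=0.5, b=1.0)` returns a `(5, 4)` array whose values are largest for middle tokens, since they accumulate context from both directions; in the real model the scan would run over skeleton-ordered human tokens with learned, input-dependent parameters.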

URL

https://arxiv.org/abs/2504.08718

PDF

https://arxiv.org/pdf/2504.08718.pdf

