Paper Reading AI Learner

Spatial-Temporal Perception with Causal Inference for Naturalistic Driving Action Recognition

2025-03-06 04:28:11
Qing Chang, Wei Dai, Zhihao Shuai, Limin Yu, Yutao Yue

Abstract

Naturalistic driving action recognition is essential for vehicle cabin monitoring systems. However, the complexity of real-world backgrounds presents significant challenges for this task, and previous approaches have struggled with practical implementation due to their limited ability to observe subtle behavioral differences and effectively learn inter-frame features from video. In this paper, we propose a novel Spatial-Temporal Perception (STP) architecture that emphasizes both temporal information and spatial relationships between key objects, incorporating a causal decoder to perform behavior recognition and temporal action localization. Without requiring multimodal input, STP directly extracts temporal and spatial distance features from RGB video clips. Subsequently, these dual features are jointly encoded by maximizing the expected likelihood across all possible permutations of the factorization order. By integrating temporal and spatial features at different scales, STP can perceive subtle behavioral changes in challenging scenarios. Additionally, we introduce a causal-aware module to explore relationships between video frame features, significantly enhancing detection efficiency and performance. We validate the effectiveness of our approach using two publicly available driver distraction detection benchmarks. The results demonstrate that our framework achieves state-of-the-art performance.
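The joint encoding step, "maximizing the expected likelihood across all possible permutations of the factorization order," matches the form of permutation language modeling. A hedged sketch of that objective, where \(\mathcal{Z}_T\) denotes the set of all permutations of a length-\(T\) sequence (notation assumed for illustration, not taken from the paper):

```latex
% Permutation-order likelihood objective (illustrative sketch):
% z is a permutation of (1, ..., T); z_t is its t-th element,
% and x_{z_{<t}} are the features at the positions preceding z_t in z.
\max_{\theta} \;
\mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}
\left[
  \sum_{t=1}^{T} \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}} \right)
\right]
```

Averaging over factorization orders lets each position condition on every other position in expectation, which is consistent with the abstract's claim of jointly encoding the temporal and spatial-distance features.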
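The abstract also mentions a causal decoder and a causal-aware module relating video-frame features. The paper's exact module is not specified here; as a generic illustration only, standard causal (masked) self-attention restricts each frame to attend to itself and earlier frames:

```python
import numpy as np

def causal_attention(x):
    """Single-head scaled dot-product attention with a causal mask.

    Each row (frame feature) attends only to itself and earlier frames.
    Generic sketch for illustration; not the paper's causal-aware module.
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                     # (T, T) pairwise similarities
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future positions
    scores[mask] = -np.inf                            # block attention to future frames
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ x

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))  # 4 frames, 8-dim features
out = causal_attention(feats)
print(np.allclose(out[0], feats[0]))  # True: frame 0 can only attend to itself
```

The mask guarantees the first output depends only on the first frame, which is the property a causal decoder needs for online temporal action localization.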

URL

https://arxiv.org/abs/2503.04078

PDF

https://arxiv.org/pdf/2503.04078.pdf

