Paper Reading AI Learner

TemporalMaxer: Maximize Temporal Context with only Max Pooling for Temporal Action Localization

2023-03-16 03:11:26
Tuan N. Tang, Kwonyoung Kim, Kwanghoon Sohn

Abstract

Temporal Action Localization (TAL) is a challenging task in video understanding that aims to identify and localize actions within a video sequence. Recent studies have emphasized the importance of applying long-term temporal context modeling (TCM) blocks to the extracted video clip features, for example by employing complex self-attention mechanisms. In this paper, we present the simplest method yet to address this task and argue that the extracted video clip features are already informative enough to achieve outstanding performance without sophisticated architectures. To this end, we introduce TemporalMaxer, which minimizes long-term temporal context modeling while maximizing the information drawn from the extracted video clip features with a basic, parameter-free, locally operating max-pooling block. By keeping only the most critical information from adjacent, local clip embeddings, this block yields a more efficient TAL model. We demonstrate that TemporalMaxer outperforms other state-of-the-art methods that utilize long-term TCM, such as self-attention, on various TAL datasets while requiring significantly fewer parameters and computational resources. The code for our approach is publicly available at this https URL.
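The core operation the abstract describes — a parameter-free max-pooling block applied along the temporal axis of extracted clip features — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the `(T, C)` feature layout, and the kernel/stride values are assumptions chosen for clarity.

```python
import numpy as np

def temporal_max_pool(clip_features, kernel_size=3, stride=2):
    """Parameter-free local max pooling over the temporal axis.

    clip_features: (T, C) array of extracted video clip embeddings.
    Returns a downsampled (T', C) array that keeps, per channel, only
    the strongest activation within each local temporal window —
    no learned weights, unlike a self-attention TCM block.
    """
    T, _ = clip_features.shape
    pooled = []
    for start in range(0, T - kernel_size + 1, stride):
        window = clip_features[start:start + kernel_size]  # (kernel_size, C)
        pooled.append(window.max(axis=0))  # channel-wise max in the window
    return np.stack(pooled)

# Toy example: 6 clip embeddings with 2 channels each.
feats = np.arange(12, dtype=float).reshape(6, 2)
out = temporal_max_pool(feats, kernel_size=3, stride=2)
# out has shape (2, 2): two temporal windows, channel-wise maxima.
```

Because the block has no parameters, stacking it in place of self-attention layers shrinks both the model size and the compute cost, which is the efficiency argument the abstract makes.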

URL

https://arxiv.org/abs/2303.09055

PDF

https://arxiv.org/pdf/2303.09055.pdf

