Paper Reading AI Learner

Spatiotemporal Attention-based Semantic Compression for Real-time Video Recognition

2023-05-22 07:47:27
Nan Li, Mehdi Bennis, Alexandros Iosifidis, Qi Zhang

Abstract

This paper studies computation offloading for video action recognition in edge computing. For effective semantic information extraction and compression, and following the semantic communication paradigm, we propose a novel spatiotemporal attention-based autoencoder (STAE) architecture, consisting of a frame attention module and a spatial attention module, that evaluates the importance of frames and of the pixels within each frame. Additionally, we apply entropy encoding to remove statistical redundancy from the compressed data, further reducing communication overhead. At the receiver, we develop a lightweight decoder that leverages a combined 3D-2D CNN architecture to reconstruct missing information, learning temporal and spatial information from the received data simultaneously to improve accuracy. To speed up convergence, we train the resulting STAE-based vision transformer (ViT_STAE) models in a step-by-step fashion. Experimental results show that ViT_STAE can compress the video dataset HMDB51 by 104x with only 5% accuracy loss, outperforming the state-of-the-art baseline DeepISC. Under a time-varying wireless channel, the proposed ViT_STAE achieves faster inference and higher accuracy than the DeepISC-based ViT model, which highlights the effectiveness of STAE in guaranteeing higher accuracy under time constraints.
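The abstract describes the encoder's two attention stages and the 3D-2D decoder only at a high level. Below is a minimal PyTorch sketch of how such a pipeline could be wired up; every module name, layer choice, and hyperparameter here is an illustrative assumption (a per-frame pooling score for frame attention, CBAM-style spatial attention, a single Conv3d bottleneck), not the paper's actual architecture, and the entropy-coding stage is only noted in a comment.

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    """Weights each frame by a learned importance score (illustrative design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))        # average over H, W; keep T
        self.score = nn.Sequential(nn.Conv1d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, T, H, W)
        w = self.pool(x).squeeze(-1).squeeze(-1)              # (B, C, T)
        w = self.score(w).unsqueeze(-1).unsqueeze(-1)         # (B, 1, T, 1, 1)
        return x * w                                          # de-emphasize unimportant frames

class SpatialAttention(nn.Module):
    """Weights pixels within each frame (CBAM-style, an assumption)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        y = x.transpose(1, 2).reshape(b * t, c, h, w)         # fold frames into the batch
        attn = torch.cat([y.mean(1, keepdim=True), y.amax(1, keepdim=True)], dim=1)
        y = y * torch.sigmoid(self.conv(attn))                # per-pixel gate in [0, 1]
        return y.reshape(b, t, c, h, w).transpose(1, 2)

class STAESketch(nn.Module):
    """Encoder: frame + spatial attention, then spatial downsampling.
    Decoder: a 3D conv for temporal structure, then 2D upsampling per frame."""
    def __init__(self, channels: int = 3, latent: int = 8):
        super().__init__()
        self.frame_attn = FrameAttention(channels)
        self.spatial_attn = SpatialAttention()
        self.encode = nn.Conv3d(channels, latent, kernel_size=3, stride=(1, 2, 2), padding=1)
        self.decode3d = nn.Conv3d(latent, latent, kernel_size=3, padding=1)
        self.decode2d = nn.ConvTranspose2d(latent, channels, kernel_size=4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, 3, T, H, W)
        z = self.encode(self.spatial_attn(self.frame_attn(x)))
        # In the paper's pipeline, z would be entropy-encoded before
        # transmission to strip statistical redundancy; omitted here.
        y = torch.relu(self.decode3d(z))                      # learn temporal structure
        b, c, t, h, w = y.shape
        y = y.transpose(1, 2).reshape(b * t, c, h, w)
        y = self.decode2d(y)                                  # recover spatial detail per frame
        return y.reshape(b, t, 3, h * 2, w * 2).transpose(1, 2)

clip = torch.randn(2, 3, 16, 112, 112)                       # batch of 16-frame clips
print(STAESketch()(clip).shape)                              # torch.Size([2, 3, 16, 112, 112])
```

The split mirrors the abstract's asymmetry: the heavy attention-based compression sits on the device side, while the receiver-side decoder stays lightweight, using one 3D convolution for temporal correlations and cheap 2D transposed convolutions for spatial upsampling.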

Abstract (translated)

This paper studies the computation offloading problem of video action recognition in edge computing. To extract and compress semantic information effectively, we propose a novel spatiotemporal attention-based autoencoder (STAE) architecture, comprising a frame attention module and a spatial attention module, to evaluate the importance of each frame and pixel. In addition, we use entropy encoding to remove statistical redundancy from the compressed data, further reducing communication overhead. At the receiver, we develop a lightweight decoder that uses a combined 3D-2D convolutional neural network architecture to learn temporal and spatial information simultaneously from the received data, improving accuracy. To accelerate convergence, we adopt a step-by-step approach to train the resulting STAE-based vision Transformer (ViT_STAE) models. Experimental results show that ViT_STAE compresses the HMDB51 video dataset by 104x with only 5% accuracy loss. By comparison, the DeepISC-based ViT model performs worse than ViT_STAE over a time-varying wireless channel, which demonstrates the effectiveness of STAE in guaranteeing higher accuracy under time constraints.

URL

https://arxiv.org/abs/2305.12796

PDF

https://arxiv.org/pdf/2305.12796.pdf

