Abstract
This paper studies the computational offloading of video action recognition in edge computing. To extract and compress semantic information effectively, we follow the semantic-communication paradigm and propose a novel spatiotemporal attention-based autoencoder (STAE) architecture that includes a frame attention module and a spatial attention module to evaluate the importance of frames and of pixels within each frame. Additionally, we apply entropy coding to remove statistical redundancy from the compressed data and further reduce communication overhead. At the receiver, we develop a lightweight decoder that leverages a combined 3D-2D CNN architecture to reconstruct missing information by simultaneously learning temporal and spatial features from the received data, improving accuracy. To speed up convergence, we train the resulting STAE-based vision transformer (ViT_STAE) models step by step. Experimental results show that ViT_STAE compresses the HMDB51 video dataset by 104x with only a 5% accuracy loss, outperforming the state-of-the-art baseline DeepISC. The proposed ViT_STAE achieves faster inference and higher accuracy than the DeepISC-based ViT model under time-varying wireless channels, which highlights the effectiveness of STAE in guaranteeing higher accuracy under time constraints.
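To make the frame/spatial attention idea concrete, here is a minimal NumPy sketch of how such a module could score and prune a video clip. This is an illustrative toy, not the paper's STAE: the "learned" projection is a random placeholder, the tensor shapes are assumed, and top-k frame selection stands in for the actual learned compression.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy video clip: T frames of H x W "pixels" (hypothetical shapes).
rng = np.random.default_rng(0)
T, H, W = 8, 16, 16
video = rng.standard_normal((T, H, W))

# Frame attention (placeholder for a learned module): score each
# frame from its globally pooled features, normalized with softmax.
w_frame = rng.standard_normal()          # stand-in for learned weights
frame_scores = softmax(video.mean(axis=(1, 2)) * w_frame)   # shape (T,)

# Spatial attention: per-pixel importance weights within each frame.
spatial_scores = softmax(video.reshape(T, -1), axis=-1).reshape(T, H, W)

# "Compress" by keeping only the k most important frames,
# re-weighted by their spatial attention maps.
k = 2
top = np.argsort(frame_scores)[-k:]
compressed = video[top] * spatial_scores[top]
print(compressed.shape)  # (2, 16, 16)
```

In the paper's pipeline, the output of this stage would then be entropy-coded to remove residual statistical redundancy before transmission, and the 3D-2D CNN decoder at the receiver would reconstruct the discarded temporal and spatial detail.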
URL
https://arxiv.org/abs/2305.12796