Paper Reading AI Learner

RetinaNet: Reservoir-Enabled Time Integrated Attention Network for Event-based Video Processing

2023-03-19 21:20:45
Sangmin Yoo, Eric Yeu-Jer Lee, Ziyu Wang, Xinxin Wang, Wei D. Lu

Abstract

Event-based cameras are inspired by the sparse and asynchronous spike representation of the biological visual system. However, processing the event data requires either using expensive feature descriptors to transform spikes into frames, or using spiking neural networks that are difficult to train. In this work, we propose a neural network architecture based on simple convolution layers integrated with dynamic temporal encoding reservoirs, with low hardware and training costs. The Reservoir-enabled Time Integrated Attention Network (RetinaNet) allows the network to efficiently process asynchronous temporal features, and achieves the highest accuracy reported to date, 99.2%, on DVS128 Gesture, as well as one of the highest accuracies, 67.5%, on the DVS Lip dataset, at a much smaller network size. By leveraging the internal dynamics of memristors, asynchronous temporal feature encoding can be implemented at very low hardware cost without preprocessing or dedicated memory and arithmetic units. The use of simple DNN blocks and backpropagation-based training rules further reduces the implementation cost. Code will be publicly available.
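To make the architecture described in the abstract concrete, below is a minimal sketch (not the paper's code) of the general idea: a leaky-integrator "reservoir" accumulates asynchronous event slices into decaying temporal traces, simple convolution layers extract features from each trace, and an attention readout pools over time. All names and hyperparameters here (LeakyReservoirEncoder, EventReservoirNet, decay, n_classes) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a reservoir-enabled, time-integrated attention pipeline.
# Assumed, illustrative names; not the authors' implementation.
import torch
import torch.nn as nn


class LeakyReservoirEncoder(nn.Module):
    """Integrates binned event slices with exponential decay, mimicking the
    short-term memory of a dynamic (e.g., memristive) reservoir node."""

    def __init__(self, decay: float = 0.9):
        super().__init__()
        self.decay = decay

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, time, channels, H, W) binned event slices
        state = torch.zeros_like(events[:, 0])
        traces = []
        for t in range(events.shape[1]):
            state = self.decay * state + events[:, t]
            traces.append(state)
        return torch.stack(traces, dim=1)  # (batch, time, channels, H, W)


class EventReservoirNet(nn.Module):
    """Per-trace conv feature extractor plus attention-weighted temporal pooling."""

    def __init__(self, in_ch: int = 2, n_classes: int = 11):
        super().__init__()
        self.reservoir = LeakyReservoirEncoder(decay=0.9)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(64, 1)        # scores each time step
        self.head = nn.Linear(64, n_classes)

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        traces = self.reservoir(events)
        b, t = traces.shape[:2]
        feats = self.conv(traces.flatten(0, 1)).view(b, t, -1)  # (b, t, 64)
        weights = torch.softmax(self.attn(feats), dim=1)        # (b, t, 1)
        pooled = (weights * feats).sum(dim=1)                   # (b, 64)
        return self.head(pooled)


# Example: 8 time bins of 2-channel (ON/OFF polarity) 128x128 event frames.
logits = EventReservoirNet()(torch.rand(4, 8, 2, 128, 128))
print(logits.shape)  # torch.Size([4, 11])
```

In this sketch the temporal encoding is a simple per-pixel exponential decay; the paper instead realizes this dynamic encoding directly in memristor hardware, which is what removes the need for preprocessing or dedicated memory and arithmetic units.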


URL

https://arxiv.org/abs/2303.10770

PDF

https://arxiv.org/pdf/2303.10770.pdf

