SparseFormer: Sparse Visual Recognition via Limited Latent Tokens

2023-04-07 17:59:58
Ziteng Gao, Zhan Tong, Limin Wang, Mike Zheng Shou

Abstract

Human visual recognition is a sparse process: only a few salient visual cues are attended to, rather than every detail being traversed uniformly. However, most current vision networks follow a dense paradigm, processing every single visual unit (e.g., pixel or patch) in a uniform manner. In this paper, we challenge this dense paradigm and present a new method, coined SparseFormer, to imitate human sparse visual recognition in an end-to-end manner. SparseFormer learns to represent images using a highly limited number of tokens (as few as 49) in the latent space via a sparse feature sampling procedure, instead of processing dense units in the original pixel space. SparseFormer therefore circumvents most of the dense operations in image space and has much lower computational costs. Experiments on the ImageNet classification benchmark show that SparseFormer achieves performance on par with canonical, well-established models while offering a better accuracy-throughput tradeoff. Moreover, our network design can be easily extended to video classification, with promising performance at lower computational cost. We hope that our work provides an alternative way of visual modeling and inspires further research on sparse neural architectures. The code will be publicly available at this https URL.
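To make the mechanism concrete, below is a minimal PyTorch sketch of one latent-token sampling step, based on the simplest reading of the abstract: a fixed set of latent tokens, each predicting a few normalized image coordinates and gathering features there by bilinear sampling. All names (SparseSamplingBlock, num_points, etc.) are illustrative assumptions, and the paper's full design is richer (it refines per-token region descriptors across stages), so treat this as a sketch rather than the authors' implementation.

# Minimal sketch of the latent-token mechanism described above (PyTorch).
# Everything here (module name, hyperparameters, direct coordinate
# prediction) is an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseSamplingBlock(nn.Module):
    """One recognition step: each latent token gathers a handful of image
    features at locations it predicts, instead of attending densely."""
    def __init__(self, dim=256, num_tokens=49, num_points=36):
        super().__init__()
        # the limited latent token set (e.g., 49 tokens, per the abstract)
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim))
        # each token predicts (x, y) coordinates for its sampling points
        self.to_coords = nn.Linear(dim, num_points * 2)
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat):                          # feat: (B, C, H, W)
        B = feat.size(0)
        tok = self.tokens.unsqueeze(0).expand(B, -1, -1)         # (B, N, C)
        coords = self.to_coords(tok).tanh()                      # in [-1, 1]
        grid = coords.view(B, tok.size(1), -1, 2)                # (B, N, P, 2)
        # bilinear sparse sampling: P features per token, not H*W
        sampled = F.grid_sample(feat, grid, align_corners=False) # (B, C, N, P)
        sampled = sampled.mean(dim=-1).transpose(1, 2)           # (B, N, C)
        return self.norm(tok + self.proj(sampled))               # refined tokens

With 49 tokens sampling a few dozen points each, the per-step cost is independent of how many pixels the image contains, which is the source of the accuracy-throughput tradeoff claimed above; a classifier could then simply average the refined tokens and apply a linear head.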

URL

https://arxiv.org/abs/2304.03768

PDF

https://arxiv.org/pdf/2304.03768.pdf

