Abstract
Human visual recognition is a sparse process: only a few salient visual cues are attended to, rather than every detail being traversed uniformly. However, most current vision networks follow a dense paradigm, processing every visual unit (e.g., pixel or patch) in a uniform manner. In this paper, we challenge this dense paradigm and present a new method, coined SparseFormer, to imitate human sparse visual recognition in an end-to-end manner. SparseFormer learns to represent images with a highly limited number of tokens (down to 49) in the latent space via a sparse feature sampling procedure, instead of processing dense units in the original pixel space. SparseFormer thus circumvents most dense operations in image space and has much lower computational costs. Experiments on the ImageNet classification benchmark show that SparseFormer achieves performance on par with canonical, well-established models while offering a better accuracy-throughput tradeoff. Moreover, our network design can be easily extended to video classification, with promising performance at lower computational cost. We hope our work provides an alternative approach to visual modeling and inspires further research on sparse neural architectures. The code will be publicly available at this https URL
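The core idea — a small set of latent tokens, each sampling only a handful of feature points instead of attending to every pixel — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the function names (`sparse_token_features`, `bilinear_sample`), the RoI parameterization, and the sample counts are our assumptions for exposition.

```python
import numpy as np

def bilinear_sample(feat, xs, ys):
    """Bilinearly sample a feature map feat (H, W, C) at fractional coords."""
    H, W, _ = feat.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    dx = np.clip(xs - x0, 0.0, 1.0)[:, None]
    dy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    top = feat[y0, x0] * (1 - dx) + feat[y0, x0 + 1] * dx
    bot = feat[y0 + 1, x0] * (1 - dx) + feat[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def sparse_token_features(image_feat, rois, points_per_token=36):
    """Each latent token samples a small grid of points inside its own
    region of interest, instead of processing every spatial location.
    rois: (T, 4) array of (cx, cy, w, h) in pixel coordinates.
    Returns sampled features of shape (T, points_per_token, C)."""
    H, W, C = image_feat.shape
    side = int(np.sqrt(points_per_token))
    offs = (np.arange(side) + 0.5) / side - 0.5   # unit grid in [-0.5, 0.5)
    gy, gx = np.meshgrid(offs, offs, indexing="ij")
    out = np.empty((len(rois), side * side, C), dtype=image_feat.dtype)
    for t, (cx, cy, w, h) in enumerate(rois):
        xs = cx + gx.ravel() * w                  # scale grid to this RoI
        ys = cy + gy.ravel() * h
        out[t] = bilinear_sample(image_feat, xs, ys)
    return out

# 49 latent tokens touch only 49 * 36 points of a 224x224 feature map,
# rather than all 224 * 224 locations a dense model would process.
rng = np.random.default_rng(0)
feat = rng.random((224, 224, 96), dtype=np.float32)
rois = np.column_stack([
    rng.uniform(32, 192, (49, 2)),   # token RoI centers (cx, cy)
    rng.uniform(16, 64, (49, 2)),    # token RoI sizes (w, h)
])
tokens = sparse_token_features(feat, rois)
print(tokens.shape)  # (49, 36, 96)
```

In the actual model, the RoIs and sampling offsets would be predicted and refined by the network end-to-end rather than drawn at random; this sketch only shows why the per-image cost scales with the token count instead of the pixel count.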
URL
https://arxiv.org/abs/2304.03768