Abstract
The Transformer, owing to its excellent performance in natural language processing, has been applied to computer vision, where it surpasses traditional convolutional neural networks and achieves new state-of-the-art results. ViT divides an image into several local patches, known as "visual sentences". However, the information contained in an image is vast and complex, and attending only to features at the "visual sentence" level is not enough; the features between local patches should also be taken into consideration. To achieve further improvement, the TNT model was proposed: its algorithm further divides the image into smaller patches, namely "visual words", achieving more accurate results. The core of the Transformer is the multi-head attention mechanism, yet traditional attention mechanisms ignore interactions across different attention heads. To reduce redundancy and improve utilization, we introduce the nested algorithm and apply the resulting Nested-TNT to image classification tasks. Experiments confirm that the proposed model achieves better classification performance than ViT and TNT, exceeding them by 2.25% and 1.1% on CIFAR10 and by 2.78% and 0.25% on FLOWERS102, respectively.
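The two-level tokenization described above can be sketched as a simple array transform. The following is a minimal NumPy illustration, not the paper's implementation: the sizes (224x224 image, 16x16 outer patches, 4x4 inner sub-patches) are assumed common ViT/TNT defaults, and the function name is hypothetical.

```python
import numpy as np

def to_sentences_and_words(img, patch=16, word=4):
    """Split an HxWxC image into outer patches ("visual sentences")
    and, within each patch, smaller sub-patches ("visual words").
    Sizes are assumptions, not taken from the paper."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0 and patch % word == 0
    # Outer split: one flattened token per patch -- the "visual sentences".
    sentences = (img.reshape(h // patch, patch, w // patch, patch, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * c))
    # Inner split: each patch is further cut into word x word sub-patches,
    # the "visual words" a TNT-style inner Transformer attends over.
    words = (img.reshape(h // patch, patch // word, word,
                         w // patch, patch // word, word, c)
                .transpose(0, 3, 1, 4, 2, 5, 6)
                .reshape((h // patch) * (w // patch),
                         (patch // word) ** 2, word * word * c))
    return sentences, words

img = np.zeros((224, 224, 3))
sents, wrds = to_sentences_and_words(img)
print(sents.shape)  # (196, 768): 14x14 sentences, each 16*16*3 values
print(wrds.shape)   # (196, 16, 48): 16 words per sentence, each 4*4*3 values
```

In the actual models these flattened pixel vectors would be linearly projected to token embeddings before entering the outer (sentence-level) and inner (word-level) attention blocks.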
URL
https://arxiv.org/abs/2404.13434