Nested-TNT: Hierarchical Vision Transformers with Multi-Scale Feature Processing

2024-04-20 17:56:14
Yuang Liu, Zhiheng Qiu, Xiaokai Qin

Abstract

The Transformer, owing to its excellent performance in natural language processing, has been applied to computer vision, where it surpasses traditional convolutional neural networks and achieves new state-of-the-art results. ViT divides an image into several local patches, known as "visual sentences". However, the information contained in an image is vast and complex, and focusing only on features at the "visual sentence" level is not enough; the finer-grained features within each local patch should also be taken into consideration. To achieve further improvement, the TNT model was proposed, which further divides the image into smaller patches, namely "visual words", achieving more accurate results. The core of the Transformer is the multi-head attention mechanism, yet traditional attention mechanisms ignore interactions across different attention heads. To reduce this redundancy and improve utilization, we introduce a nested algorithm and apply the resulting Nested-TNT to image classification tasks. Experiments confirm that the proposed model achieves better classification performance than ViT and TNT, exceeding them by 2.25% and 1.1% on CIFAR10, and by 2.78% and 0.25% on FLOWERS102, respectively.
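
The abstract describes the two-level tokenization only in words. The following minimal PyTorch sketch illustrates how an image can be split into outer "visual sentences" and inner "visual words" in the TNT style; the 224x224 input size and the 16x16 / 4x4 patch sizes are illustrative assumptions, not necessarily the paper's configuration, and the nested cross-head attention itself is not shown since the abstract does not detail its formulation.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes (assumptions, not the paper's exact settings).
B, C, H, W = 2, 3, 224, 224           # batch of RGB images
sentence_size, word_size = 16, 4      # outer patch 16x16, inner sub-patch 4x4

x = torch.randn(B, C, H, W)

# "Visual sentences": non-overlapping 16x16 patches, as in ViT.
# unfold flattens each patch to C * 16 * 16 = 768 values.
sentences = F.unfold(x, kernel_size=sentence_size, stride=sentence_size)
sentences = sentences.transpose(1, 2)          # (B, 196, 768)
num_sentences = sentences.shape[1]             # (224 / 16) ** 2 = 196

# "Visual words": each sentence is further split into 4x4 sub-patches,
# the finer-grained tokens that TNT attends over inside every patch.
patches = sentences.reshape(B * num_sentences, C, sentence_size, sentence_size)
words = F.unfold(patches, kernel_size=word_size, stride=word_size)
words = words.transpose(1, 2)                  # (B*196, 16, 48)

print(sentences.shape)  # torch.Size([2, 196, 768])
print(words.shape)      # torch.Size([392, 16, 48])
```

In a full model, each token sequence would be linearly projected and fed through transformer blocks: the word tokens are processed inside each sentence and their information is fused back into the sentence tokens, which is where Nested-TNT's cross-head interaction would apply.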

URL

https://arxiv.org/abs/2404.13434

PDF

https://arxiv.org/pdf/2404.13434.pdf
