
What Makes for Good Visual Tokenizers for Large Language Models?

2023-05-20 16:11:26
Guangzhi Wang, Yixiao Ge, Xiaohan Ding, Mohan Kankanhalli, Ying Shan

Abstract

We empirically investigate proper pre-training methods for building good visual tokenizers that turn Large Language Models (LLMs) into powerful Multimodal Large Language Models (MLLMs). On our benchmark, curated to evaluate MLLMs' visual semantic understanding and fine-grained perception capabilities, we compare visual tokenizers pre-trained with dominant methods (i.e., DeiT, CLIP, MAE, DINO) and observe that: i) fully/weakly supervised models capture more semantics than self-supervised models, but the gap narrows as the pre-training dataset is scaled up; ii) self-supervised models are better at fine-grained perception, where patch-level supervision is particularly effective; iii) tuning the visual tokenizer loses the semantics obtained from large-scale pre-training, which is unfavorable when the instruction-tuning dataset is relatively small. Given these findings, we review methods that attempt to unify semantics and fine-grained visual understanding, e.g., patch-level feature distillation with semantically rich targets. We obtain an intriguing insight: mask-based strategies that were once all the rage may not be suitable for obtaining good visual tokenizers. Based on this critical observation, we build a new MLLM equipped with a tailored Good Visual Tokenizer (GVT), which exhibits strong visual comprehension capability at multiple scales. In particular, without introducing extra parameters or task-specific fine-tuning, GVT achieves superior performance on visual question answering, image captioning, and other fine-grained visual understanding tasks such as object counting and multi-class identification.
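To make the "patch-level feature distillation with semantically rich targets" idea concrete, the PyTorch sketch below trains a student visual tokenizer's per-patch features to match those of a frozen, semantically rich teacher (e.g., a CLIP image encoder) under a cosine-similarity loss. This is a minimal illustration, not the paper's implementation: the class name, feature dimensions, projection head, and loss choice are all assumptions for exposition.

```python
# Hypothetical sketch of patch-level feature distillation with a
# semantically rich teacher. Names, dimensions, and the loss are
# illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDistiller(nn.Module):
    def __init__(self, student: nn.Module, teacher: nn.Module,
                 student_dim: int = 768, teacher_dim: int = 1024):
        super().__init__()
        self.student = student              # trainable visual tokenizer
        self.teacher = teacher.eval()       # frozen semantic teacher (e.g., CLIP)
        for p in self.teacher.parameters():
            p.requires_grad = False
        # Project student patch features into the teacher's feature space.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Both encoders are assumed to return per-patch features of shape
        # (batch, num_patches, dim), with spatially aligned patch grids.
        s = self.proj(self.student(images))
        with torch.no_grad():
            t = self.teacher(images)
        # Negative cosine similarity between corresponding patches:
        # pulls each student patch toward its semantic target.
        return 1.0 - F.cosine_similarity(s, t, dim=-1).mean()
```

Distilling at the patch level, rather than from a single global embedding, is what lets the student inherit semantics while retaining the fine-grained, per-patch supervision the abstract identifies as important.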

URL

https://arxiv.org/abs/2305.12223

PDF

https://arxiv.org/pdf/2305.12223.pdf

