Paper Reading AI Learner

A Progressive Framework of Vision-language Knowledge Distillation and Alignment for Multilingual Scene

2024-04-17 10:56:06
Wenbo Zhang, Yifan Zhang, Jianfeng Lin, Binqiang Huang, Jinlu Zhang, Wenhao Yu

Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent performance on many downstream cross-modal tasks. However, most of them are only applicable to the English context. Subsequent research has focused on this problem and proposed improved models, such as CN-CLIP and AltCLIP, to extend their applicability to Chinese and other languages. Nevertheless, these models suffer from high latency and a large memory footprint at inference time, which limits their deployment on resource-constrained edge devices. In this work, we propose a conceptually simple yet effective multilingual CLIP compression framework and train a lightweight multilingual vision-language model, called DC-CLIP, for both Chinese and English contexts. In this framework, we collect high-quality Chinese and English text-image pairs and design two training stages: multilingual vision-language feature distillation and alignment. In the first stage, lightweight image and text student models learn robust visual and multilingual textual feature representations from their corresponding teacher models. The subsequent multilingual vision-language alignment stage aligns the visual and multilingual textual features to further improve the model's multilingual performance. Comprehensive zero-shot image classification experiments on the ELEVATER benchmark show that DC-CLIP achieves superior performance in the English context and competitive performance in the Chinese context, even with less training data, compared to existing models of similar parameter magnitude. The evaluation demonstrates the effectiveness of our training mechanism.
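The abstract describes the two training stages only at a high level. Below is a minimal PyTorch-style sketch of what the two objectives could look like, assuming MSE-based feature distillation for stage one and a symmetric CLIP-style InfoNCE loss for stage two; the function names and the choice of distance are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats):
    """Stage 1 (assumed form): feature distillation.

    Pulls the lightweight student's embeddings toward the frozen
    teacher's embeddings. MSE is one common choice; the paper may
    use a different distance. Applied separately to the image
    student (vs. the vision teacher) and the text student
    (vs. the multilingual text teacher).
    """
    return F.mse_loss(student_feats, teacher_feats.detach())

def alignment_loss(image_feats, text_feats, temperature=0.07):
    """Stage 2 (assumed form): CLIP-style contrastive vision-language
    alignment, i.e. a symmetric InfoNCE loss over a batch of matched
    image-text pairs.
    """
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)       # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)   # text -> image
    return (loss_i2t + loss_t2i) / 2
```

Under this reading, stage one would be run for each student against its own teacher, and stage two would then fine-tune the resulting lightweight encoders on paired Chinese and English image-text data to align the shared embedding space.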

URL

https://arxiv.org/abs/2404.11249

PDF

https://arxiv.org/pdf/2404.11249.pdf

