Abstract
The tremendous success of CLIP (Radford et al., 2021) has promoted research on and application of contrastive learning for vision-language pretraining. However, while publicly available CLIP models are mostly pretrained on English data, it is hard to find a CLIP model pretrained on Chinese data. We argue that pretraining a Chinese CLIP is valuable to both research and industry for the following reasons. First, it can benefit vision-language retrieval in Chinese and thus promote language-specific multimodal representation learning. Second, the distribution of images on Chinese websites is likely to differ from that on English websites. In this work, we construct a large-scale dataset of Chinese image-text pairs, most of which are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop five Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, in which the model is first trained with the image encoder frozen and then trained with all parameters optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP achieves state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in both the zero-shot and finetuning setups, and that it achieves competitive performance in zero-shot image classification on the ELEVATER benchmark (Li et al., 2022). Moreover, our ablation study shows that the two-stage pretraining method is the most effective among the compared options. We release our code at this https URL
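To make the two-stage schedule concrete, below is a minimal sketch of how such a freeze-then-unfreeze pretraining loop could look in PyTorch. The `ChineseCLIP` class, the toy image/text towers, and the `train_stage` helper are illustrative assumptions for exposition only, not the authors' released implementation; only the overall idea (stage 1 trains with the image encoder frozen, stage 2 optimizes all parameters with the standard CLIP contrastive loss) follows the abstract.

```python
# Hypothetical sketch of two-stage CLIP-style pretraining (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChineseCLIP(nn.Module):
    """Toy dual encoder standing in for a real CLIP model (illustrative only)."""

    def __init__(self, embed_dim: int = 512, vocab_size: int = 21128):
        super().__init__()
        # Stand-in image tower: pool the image and project to the shared space.
        self.visual = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 8 * 8, embed_dim)
        )
        # Stand-in text tower: mean-pooled token embeddings projected implicitly.
        self.text = nn.EmbeddingBag(vocab_size, embed_dim)
        # Learnable temperature, initialized to log(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, images, tokens):
        img = F.normalize(self.visual(images), dim=-1)
        txt = F.normalize(self.text(tokens), dim=-1)
        return img, txt


def contrastive_loss(img, txt, logit_scale):
    # Symmetric InfoNCE over in-batch image-text pairs (the CLIP objective).
    logits = logit_scale.exp() * img @ txt.t()
    labels = torch.arange(img.size(0), device=img.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2


def train_stage(model, loader, optimizer, freeze_image_encoder: bool):
    # Stage 1: image encoder frozen, so only the text tower adapts to Chinese data.
    # Stage 2: all parameters are optimized jointly.
    for p in model.visual.parameters():
        p.requires_grad = not freeze_image_encoder
    model.train()
    for images, tokens in loader:
        img, txt = model(images, tokens)
        loss = contrastive_loss(img, txt, model.logit_scale)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


# Usage (learning rates are placeholders):
# model = ChineseCLIP()
# opt1 = torch.optim.AdamW(model.parameters(), lr=1e-4)
# train_stage(model, loader, opt1, freeze_image_encoder=True)   # stage 1
# opt2 = torch.optim.AdamW(model.parameters(), lr=1e-5)
# train_stage(model, loader, opt2, freeze_image_encoder=False)  # stage 2
```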
URL
https://arxiv.org/abs/2211.01335