Paper Reading AI Learner

Delving Deeper into Data Scaling in Masked Image Modeling

2023-05-24 15:33:46
Cheng-Ze Lu, Xiaojie Jin, Qibin Hou, Jun Hao Liew, Ming-Ming Cheng, Jiashi Feng

Abstract

Understanding whether self-supervised learning methods can scale with unlimited data is crucial for training large-scale models. In this work, we conduct an empirical study on the scaling capability of masked image modeling (MIM) methods (e.g., MAE) for visual recognition. Unlike most previous works that depend on the widely-used ImageNet dataset, which is manually curated and object-centric, we take a step further and propose to investigate this problem in a more practical setting. Specifically, we utilize the web-collected Coyo-700M dataset. We randomly sample varying numbers of training images from the Coyo dataset and construct a series of sub-datasets, containing 0.5M, 1M, 5M, 10M, and 100M images, for pre-training. Our goal is to investigate how the performance changes on downstream tasks when scaling with different sizes of data and models. The study reveals that: 1) MIM can be viewed as an effective method to improve the model capacity when the scale of the training data is relatively small; 2) Strong reconstruction targets can endow the models with increased capacities on downstream tasks; 3) MIM pre-training is data-agnostic under most scenarios, which means that the strategy of sampling pre-training data is non-critical. We hope these observations could provide valuable insights for future research on MIM.
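The sub-dataset construction described above amounts to drawing fixed-size random samples from a large image corpus. Below is a minimal sketch of that setup; the `img_…` identifiers and the scaled-down sizes are illustrative stand-ins for the Coyo-700M index and the paper's 0.5M–100M splits, not the authors' actual pipeline.

```python
import random

# Stand-in list of image identifiers; in the paper this role is played
# by the ~700M entries of the web-collected Coyo-700M dataset.
all_images = [f"img_{i:09d}" for i in range(1_000)]

# The paper pre-trains on sub-datasets of 0.5M, 1M, 5M, 10M, and 100M
# images; scaled-down sizes are used here for illustration.
subset_sizes = [10, 50, 100]

# Fixed seed so the sampled splits are reproducible across runs.
rng = random.Random(0)

# Each sub-dataset is an independent uniform random sample (without
# replacement within each sample) from the full corpus.
subsets = {n: rng.sample(all_images, n) for n in subset_sizes}

for n, subset in subsets.items():
    print(f"sub-dataset of {n} images, e.g. {subset[0]}")
```

Because the paper finds MIM pre-training to be largely data-agnostic, a plain uniform sample like this (rather than a curated selection strategy) is consistent with its third observation.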

URL

https://arxiv.org/abs/2305.15248

PDF

https://arxiv.org/pdf/2305.15248.pdf

