Visual Representation Learning from Unlabeled Video using Contrastive Masked Autoencoders


Abstract

Masked Autoencoders (MAEs) learn self-supervised representations by randomly masking input image patches and training the model to reconstruct them. In contrast, contrastive self-supervised methods encourage two views of the same input to have similar representations while pushing apart the representations of different inputs. We propose ViC-MAE, a general method that combines MAE and contrastive learning by pooling the local feature representations learned under the MAE reconstruction objective and using this global representation in a contrastive objective across video frames. We show that visual representations learned under ViC-MAE generalize well to both video classification and image classification tasks. Using a ViT-B/16 backbone network pre-trained on the Moments in Time (MiT) dataset, we obtain state-of-the-art transfer learning from video to images on ImageNet-1k, improving absolute top-1 accuracy by 1.58% over recent prior work. Moreover, our method maintains competitive transfer-learning performance of 81.50% top-1 accuracy on the Kinetics-400 video classification benchmark. In addition, we show that despite its simplicity, ViC-MAE yields better results than combining MAE pre-training with previously proposed contrastive objectives such as VICReg and SimSiam.
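The abstract describes the objective as two parts: an MAE reconstruction loss over masked patches, plus a contrastive loss applied to a pooled global representation across frames of the same video. Below is a minimal PyTorch sketch of a loss with that shape; the function name `vicmae_loss`, the mean-pooling, the MLP projector, the InfoNCE formulation, and the temperature are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def vicmae_loss(pred_patches, target_patches, feats_a, feats_b,
                projector, temperature=0.1):
    """Sketch of an MAE-reconstruction + cross-frame contrastive objective.

    pred_patches / target_patches: (B, N_masked, D_pix) decoder outputs and
        pixel targets for the masked patches (MAE branch).
    feats_a / feats_b: (B, N, D) local patch features of two frames sampled
        from the same video (contrastive branch).
    projector: small MLP mapping the pooled feature to the contrastive space.
    """
    # MAE branch: pixel reconstruction loss on the masked patches.
    loss_recon = F.mse_loss(pred_patches, target_patches)

    # Contrastive branch: mean-pool the local patch features of each frame
    # into a single global vector, then project and L2-normalize it.
    z_a = F.normalize(projector(feats_a.mean(dim=1)), dim=-1)  # (B, d)
    z_b = F.normalize(projector(feats_b.mean(dim=1)), dim=-1)  # (B, d)

    # InfoNCE: the two frames of the same video are positives; frames from
    # other videos in the batch act as negatives.
    logits = z_a @ z_b.t() / temperature                 # (B, B) similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)
    loss_contrast = F.cross_entropy(logits, labels)

    return loss_recon + loss_contrast

# Toy usage with random tensors (all sizes are assumptions for illustration):
B, N, D = 8, 196, 768                       # batch, patches per frame, feature dim
projector = torch.nn.Sequential(
    torch.nn.Linear(D, D), torch.nn.ReLU(), torch.nn.Linear(D, 128))
feats_a, feats_b = torch.randn(B, N, D), torch.randn(B, N, D)
pred, target = torch.randn(B, 49, 192), torch.randn(B, 49, 192)
loss = vicmae_loss(pred, target, feats_a, feats_b, projector)
```

Here `feats_a`/`feats_b` stand in for per-patch encoder outputs of two frames from the same video; in the actual method these would come from the ViT backbone under the MAE masking scheme.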

URL

https://arxiv.org/abs/2303.12001

PDF

https://arxiv.org/pdf/2303.12001.pdf

