Paper Reading AI Learner

Explicitly Modeling Generality into Self-Supervised Learning

2024-05-02 07:15:23
Jingyao Wang, Wenwen Qiang, Changwen Zheng

Abstract

The goal of generality in machine learning is to achieve excellent performance on various unseen tasks and domains. Recently, self-supervised learning (SSL) has been regarded as an effective way to achieve this goal: it learns high-quality representations from unlabeled data and achieves promising empirical performance on multiple downstream tasks. Existing SSL methods mainly pursue generality in two ways: (i) training on large-scale data, and (ii) learning task-level shared knowledge. However, these methods do not explicitly model generality in the learning objective, and the theoretical understanding of SSL's generality remains limited. As a result, SSL models may overfit in data-scarce situations and generalize poorly in the real world, making true generality difficult to achieve. To address these issues, we provide a theoretical definition of generality in SSL and introduce a $\sigma$-measurement to quantify it. Based on this insight, we explicitly model generality into self-supervised learning and propose a novel SSL framework, called GeSSL. It introduces a self-motivated target based on the $\sigma$-measurement, which enables the model to find the optimal update direction towards generality. Extensive theoretical and empirical evaluations demonstrate the superior performance of the proposed GeSSL.
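The abstract does not spell out how the $\sigma$-measurement or the self-motivated target are computed, so the following is only a minimal sketch of the general idea in PyTorch, not the authors' method. It assumes generality can be proxied by how low and how uniform a model's losses are across a handful of sampled tasks, and adds that proxy as an auxiliary term to a standard SSL loss. The names (`generality_penalty`, `training_step`), the penalty form (mean plus standard deviation of per-task losses), and the weight `lam` are illustrative assumptions.

```python
# Hedged sketch: the per-task losses, their mean/spread penalty, and the
# auxiliary-term weighting below are illustrative assumptions, not the
# paper's sigma-measurement or self-motivated target.
import torch
import torch.nn as nn


def generality_penalty(task_losses: torch.Tensor) -> torch.Tensor:
    """Toy generality proxy: treat a model as more 'general' when its losses
    across sampled tasks are both low and uniform, so penalize the mean plus
    the standard deviation of the per-task losses."""
    return task_losses.mean() + task_losses.std(unbiased=False)


def training_step(encoder: nn.Module,
                  ssl_loss_fn,
                  task_batches,        # iterable of (views, target) per sampled task
                  task_loss_fn,
                  optimizer: torch.optim.Optimizer,
                  lam: float = 0.1) -> float:
    """One update that augments a standard SSL objective with the generality
    penalty above, nudging the update direction toward uniformly good
    performance across tasks (a rough analogue of a self-motivated target)."""
    optimizer.zero_grad()
    per_task = []
    ssl_total = 0.0
    for views, target in task_batches:
        z = [encoder(v) for v in views]
        ssl_total = ssl_total + ssl_loss_fn(*z)       # usual self-supervised term
        per_task.append(task_loss_fn(z[0], target))   # proxy downstream loss
    task_losses = torch.stack(per_task)
    loss = ssl_total / len(per_task) + lam * generality_penalty(task_losses)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the penalty only changes how the shared encoder is updated; any SSL loss (contrastive, predictive, etc.) can be plugged in as `ssl_loss_fn`.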

URL

https://arxiv.org/abs/2405.01053

PDF

https://arxiv.org/pdf/2405.01053.pdf

