Paper Reading AI Learner

Self-Evolution Learning for Discriminative Language Model Pretraining

2023-05-24 16:00:54
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao

Abstract

Masked language modeling (MLM), widely used in pretraining discriminative language models (e.g., BERT), commonly adopts a random masking strategy. However, random masking ignores the fact that words contribute unequally to sentence meaning: some are more worth predicting than others. Various alternative masking strategies (e.g., entity-level masking) have therefore been proposed, but most require expensive prior knowledge and generally train from scratch without reusing existing model weights. In this paper, we present Self-Evolution learning (SE), a simple and effective token masking and learning method that fully and wisely exploits the knowledge in the data. SE focuses on learning informative yet under-explored tokens and adaptively regularizes training by introducing a novel Token-specific Label Smoothing approach. Experiments on 10 tasks show that SE brings consistent and significant improvements (+1.43~2.12 average score) over different PLMs. In-depth analyses demonstrate that SE improves linguistic knowledge learning and generalization.
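The two mechanics the abstract contrasts — BERT-style random masking, which selects tokens uniformly regardless of their importance, and label smoothing with a per-token smoothing factor — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names are our own, and in SE the smoothing factor would be chosen adaptively per token rather than passed in directly.

```python
import math
import random

def random_mask(tokens, mask_prob=0.15, seed=0):
    # BERT-style random masking: each token is selected with probability
    # mask_prob, independent of how informative it is -- the limitation
    # that SE's selective masking addresses.
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            targets.append(tok)   # this position is predicted
        else:
            masked.append(tok)
            targets.append(None)  # this position is not predicted
    return masked, targets

def token_specific_smoothed_loss(probs, target_idx, smooth):
    # Label-smoothed cross-entropy for one masked token: the true label
    # keeps 1 - smooth probability mass, the rest is spread uniformly
    # over the vocabulary. In token-specific label smoothing, `smooth`
    # varies per token; here it is simply a supplied number.
    vocab_size = len(probs)
    loss = 0.0
    for i, p in enumerate(probs):
        if i == target_idx:
            q = (1.0 - smooth) + smooth / vocab_size
        else:
            q = smooth / vocab_size
        loss += -q * math.log(p)
    return loss
```

With smooth=0.0 the loss reduces to the ordinary negative log-likelihood of the target token; a larger smoothing factor penalizes over-confident predictions more, which is the regularizing effect the abstract describes.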


URL

https://arxiv.org/abs/2305.15275

PDF

https://arxiv.org/pdf/2305.15275.pdf

