Abstract
Masked language modeling, widely used for pretraining discriminative language models (e.g., BERT), commonly adopts a random masking strategy. However, random masking ignores the fact that words contribute unequally to sentence meaning, so some words are more worth predicting than others. Various alternative masking strategies (e.g., entity-level masking) have therefore been proposed, but most require expensive prior knowledge and generally train from scratch without reusing existing model weights. In this paper, we present Self-Evolution learning (SE), a simple and effective token masking and learning method that fully and wisely exploits the knowledge in the data. SE focuses on learning informative yet under-explored tokens and adaptively regularizes training through a novel Token-specific Label Smoothing approach. Experiments on 10 tasks show that SE brings consistent and significant improvements (+1.43~2.12 average scores) across different PLMs. In-depth analyses demonstrate that SE improves linguistic knowledge learning and generalization.
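The abstract does not give the exact formulation of Token-specific Label Smoothing, but the core idea, a per-token smoothing strength instead of a single global one, can be sketched. The following is a minimal NumPy illustration under the assumption that the smoothing factor scales with the model's own confidence on the gold token; the function name, scaling rule, and `base_eps` parameter are hypothetical, not the paper's definition.

```python
import numpy as np

def token_specific_label_smoothing(logits, targets, base_eps=0.1):
    """Illustrative sketch (not the paper's exact method): smooth each
    token's one-hot target by an amount tied to the model's confidence
    on that token, so confidently learned tokens get softer targets
    while hard, under-explored tokens keep sharper ones."""
    vocab = logits.shape[-1]
    # numerically stable softmax over the vocabulary
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # model confidence on the gold token, shape (batch,)
    conf = probs[np.arange(len(targets)), targets]
    # token-specific smoothing factor (assumed scaling rule)
    eps = base_eps * conf
    one_hot = np.eye(vocab)[targets]
    # mix the one-hot target with a uniform distribution, per token
    smoothed = (1.0 - eps)[:, None] * one_hot + (eps / vocab)[:, None]
    # cross-entropy against the smoothed targets
    log_probs = np.log(probs + 1e-12)
    return float(-(smoothed * log_probs).sum(axis=-1).mean())
```

With `base_eps=0.0` this reduces to ordinary cross-entropy, which makes the per-token smoothing term easy to isolate when experimenting.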
URL
https://arxiv.org/abs/2305.15275