Abstract
The pretraining-fine-tuning paradigm has been the de facto strategy for transfer learning in modern language modeling. With the understanding that task adaptation in LMs is often a function of parameters shared across tasks, we argue that a more surgical approach to regularization is needed for smoother transfer learning. To this end, we investigate how the pretraining loss landscape is affected by these task-sensitive parameters through an information-theoretic lens. We then leverage the findings from our investigations to devise a novel approach to dropout for improved model regularization and better downstream generalization. This approach, named guided dropout, is both task- and architecture-agnostic and adds no computational overhead to the fine-tuning process. Through empirical evaluations, we show that our approach to regularization yields consistently better performance than standard baselines, even in scenarios of data paucity.
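The abstract does not specify how task-sensitive parameters are scored or exactly how the dropout mask is guided, so the snippet below is only a minimal illustrative sketch of the general idea of sensitivity-guided dropout: it assumes a per-unit sensitivity score (e.g., a diagonal Fisher-information estimate) and lowers the drop probability of more sensitive units. The class name `GuidedDropout`, the `sensitivity` input, and the scoring choice are all hypothetical and not taken from the paper.

```python
# Illustrative sketch only; not the paper's actual method.
import torch
import torch.nn as nn


class GuidedDropout(nn.Module):
    """Dropout whose per-unit drop probability is modulated by a sensitivity score."""

    def __init__(self, sensitivity: torch.Tensor, base_p: float = 0.1):
        super().__init__()
        # Normalize sensitivities to [0, 1]; higher = more task-sensitive (assumed).
        s = (sensitivity - sensitivity.min()) / (sensitivity.max() - sensitivity.min() + 1e-8)
        # Highly sensitive units are dropped less often.
        self.register_buffer("p_drop", base_p * (1.0 - s))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x
        keep = 1.0 - self.p_drop
        mask = torch.bernoulli(keep.expand_as(x))
        # Inverted-dropout rescaling keeps activations unbiased in expectation.
        return x * mask / keep


if __name__ == "__main__":
    hidden = 8
    # Hypothetical sensitivity scores, e.g. a diagonal Fisher estimate
    # computed on the pretraining objective (assumption, not from the paper).
    sensitivity = torch.rand(hidden)
    layer = GuidedDropout(sensitivity, base_p=0.2)
    layer.train()
    out = layer(torch.randn(4, hidden))
    print(out.shape)  # torch.Size([4, 8])
```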
URL
https://arxiv.org/abs/2406.14005