Abstract
Current state-of-the-art approaches to summarization utilize large pre-trained Transformer models. Distilling these models to smaller student models has become critically important for practical use; however, there are many different distillation methods in the NLP literature. Recent work on distilling BERT for classification and regression tasks shows strong performance using standard knowledge distillation. Alternatively, machine translation practitioners have primarily distilled using pseudo-labeling, where a small model is trained on the translations of a larger model. A third approach is to 'shrink and fine-tune' (SFT), which avoids any explicit distillation by transferring parameters to a student model and then fine-tuning. This work considers the distillation of BART and Pegasus, two state-of-the-art summarization models, on two datasets across a variety of student models. We produce high-quality, fast checkpoints across different computational budgets and identify patterns indicating which distillation techniques perform well in which situations. PyTorch code to reproduce our methods and to use the distilled BART and Pegasus checkpoints is available in Hugging Face Transformers.
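To make the 'shrink and fine-tune' idea concrete, the sketch below shows one common heuristic for the "shrink" step: choosing which teacher layers to copy into a shallower student before fine-tuning. The helper function `sft_layer_map` and the evenly-spaced selection rule are illustrative assumptions, not the paper's exact layer-selection scheme.

```python
def sft_layer_map(teacher_layers: int, student_layers: int) -> list[int]:
    """Pick evenly spaced teacher layer indices to initialize a student model.

    This is a hypothetical SFT-style heuristic: the first and last teacher
    layers are always kept, and the remaining student layers are spread as
    evenly as possible across the teacher's depth.
    """
    if not 1 <= student_layers <= teacher_layers:
        raise ValueError("student depth must be between 1 and teacher depth")
    if student_layers == 1:
        return [0]
    span = teacher_layers - 1
    return [round(i * span / (student_layers - 1)) for i in range(student_layers)]

# Example: shrinking a 12-layer teacher into a 3-layer student keeps
# the bottom, a middle, and the top teacher layer.
print(sft_layer_map(12, 3))
```

In an actual SFT pipeline, the selected teacher layers' weights would be copied into the student (along with embeddings and the LM head), and the student would then be fine-tuned on the summarization dataset with no distillation loss.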
URL
https://arxiv.org/abs/2010.13002