Abstract
Real-life multilingual systems should be able to efficiently incorporate new languages as the data distributions fed to the system evolve and shift over time. To do this, systems need to handle catastrophic forgetting, where model performance drops on languages or tasks seen earlier in training. In this paper, we study catastrophic forgetting, as well as methods to minimize it, in a massively multilingual continual learning framework involving up to 51 languages and covering both classification and sequence labeling tasks. We present LR ADJUST, a learning rate scheduling method that is simple yet effective at preserving new information without strongly overwriting past knowledge. Furthermore, we show that this method is effective across multiple continual learning approaches. Finally, we provide further insights into the dynamics of catastrophic forgetting in this massively multilingual setup.
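The abstract does not specify how LR ADJUST works. As a generic, hedged illustration of the broader idea of learning rate scheduling in sequential (continual) training, the sketch below simply decays the learning rate each time training moves on to a new language, so that later updates perturb earlier knowledge less. The function name, the decay factor, and the language list are all hypothetical and not taken from the paper.

```python
# Generic illustration only -- NOT the paper's LR ADJUST algorithm, whose
# details are not given in this abstract. The idea sketched here: shrink
# the learning rate for each successive language in the training sequence,
# so later gradient updates overwrite less of the earlier knowledge.

def scheduled_lr(base_lr: float, task_index: int, decay: float = 0.5) -> float:
    """Learning rate used when training on the task_index-th language (0-based)."""
    return base_lr * (decay ** task_index)

# Hypothetical subset of the 51 languages, trained on sequentially.
languages = ["en", "de", "sw"]
lrs = [scheduled_lr(3e-5, i) for i in range(len(languages))]
```

With `base_lr = 3e-5` and `decay = 0.5`, this yields learning rates of 3e-5, 1.5e-5, and 7.5e-6 for the three languages, making each new language's updates progressively gentler on the shared parameters.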
URL
https://arxiv.org/abs/2305.16252