Abstract
Modern deep learning techniques have demonstrated excellent capabilities in many areas, but they rely on large amounts of training data. Optimization-based meta-learning trains a model on a variety of tasks so that it can solve new learning tasks using only a small number of training samples. However, these methods assume that training and test data are identically and independently distributed. To overcome this limitation, in this paper we propose invariant meta-learning for out-of-distribution tasks. Specifically, invariant meta-learning finds an invariant optimal meta-initialization and adapts quickly to out-of-distribution tasks under a regularization penalty. Extensive experiments demonstrate the effectiveness of the proposed invariant meta-learning on out-of-distribution few-shot tasks.
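The abstract does not spell out the training objective, so the following is a minimal sketch of one plausible reading: a MAML-style inner/outer loop whose outer objective adds an invariance penalty (here, the variance of query risks across tasks, in the spirit of IRM/V-REx) on top of the mean risk. All names and choices here (inner_adapt, meta_step, penalty_weight, the two-layer MLP, the variance-based penalty) are illustrative assumptions, not the authors' code or exact method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def inner_adapt(model, support_x, support_y, inner_lr=0.01):
    # One gradient step on the support set, keeping the graph so the
    # meta-gradient can flow back to the meta-initialization.
    loss = F.cross_entropy(model(support_x), support_y)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

def functional_forward(params, x):
    # Forward pass of a two-layer MLP with explicit (adapted) parameters.
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

def meta_step(model, meta_opt, task_batch, penalty_weight=1.0):
    # Outer update: mean query risk plus a variance penalty across tasks,
    # an assumed form of the invariance regularizer.
    risks = []
    for support_x, support_y, query_x, query_y in task_batch:
        adapted = inner_adapt(model, support_x, support_y)
        risks.append(F.cross_entropy(functional_forward(adapted, query_x), query_y))
    risks = torch.stack(risks)
    meta_loss = risks.mean() + penalty_weight * risks.var()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

# Example setup (input/output sizes are placeholders):
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

In this sketch, penalizing the spread of per-task risks pushes the meta-initialization toward solutions that perform uniformly well across training tasks, which is one common way to encourage invariance to distribution shift at test time.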
URL
https://arxiv.org/abs/2301.11779