Abstract
A machine learning model that generalizes well should obtain low error on unseen test examples. Thus, if we could learn an optimal model on the training data, it would achieve better generalization performance on test tasks. However, learning such a model is not possible in standard machine learning frameworks, as the distribution of the test data is unknown. To tackle this challenge, we propose a novel robust meta-learning method that remains robust to unknown image-based test tasks whose distributions shift away from those of the training tasks. Our robust meta-learning method can provide robust optimal models even when data from each distribution are scarce. In experiments, we demonstrate that our algorithm not only achieves better generalization performance but is also robust to a variety of unknown test tasks.
URL
https://arxiv.org/abs/2301.12698