Abstract
Non-adversarial robustness, also known as natural robustness, is a property of deep learning models that enables them to maintain performance under distribution shifts caused by natural variations in data. Achieving this property is challenging, however, because the types of distribution shift that may occur are difficult to predict in advance. To address this challenge, researchers have proposed various approaches: some anticipate potential distribution shifts, while others use knowledge of shifts that have already occurred to improve model generalizability. In this paper, we present a brief overview of the most recent techniques for improving the robustness of computer vision methods, along with a summary of the robustness benchmark datasets commonly used to evaluate model performance under data distribution shifts. Finally, we examine the strengths and limitations of the reviewed approaches and identify general trends in robustness improvement for deep learning in computer vision.
URL
https://arxiv.org/abs/2305.14986