Abstract
In deep learning applications, robustness measures the ability of a neural model to withstand slight changes in its input data; a lack of robustness can lead to safety hazards, especially in safety-critical applications. Pre-deployment assessment of model robustness is essential, but existing methods often suffer from either high cost or imprecise results. To enhance safety in real-world scenarios, metrics that effectively capture a model's robustness are needed. To address this issue, we compare the rigour and applicability conditions of various assessment methods derived from different robustness definitions. We then propose a straightforward and practical metric based on hypothesis testing for probabilistic robustness and integrate it into the TorchAttacks library. Through a comparative analysis of diverse robustness assessment methods, our approach contributes to a deeper understanding of model robustness in safety-critical applications.
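As a rough illustration of how a hypothesis-testing metric for probabilistic robustness might look in a PyTorch workflow, the sketch below samples random perturbations inside an L-infinity ball around an input and runs a one-sided exact binomial test on the observed misclassification rate. The function name, the perturbation scheme, and all parameters (eps, tau, n_samples, alpha) are illustrative assumptions, not the paper's actual metric or the TorchAttacks API.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# estimate the probability of misclassification under random L-inf
# perturbations and test H0: "failure rate >= tau" at significance alpha.
import torch
from scipy.stats import binomtest


def prob_robustness_test(model, x, label, eps=8 / 255,
                         n_samples=200, tau=0.05, alpha=0.01):
    """Return (robust, failure_rate): robust is True if H0 is rejected,
    i.e. the empirical failure rate is significantly below tau."""
    model.eval()
    failures = 0
    with torch.no_grad():
        for _ in range(n_samples):
            # Uniform noise inside the L-inf ball of radius eps,
            # clipped back to the valid image range [0, 1].
            noise = torch.empty_like(x).uniform_(-eps, eps)
            x_pert = torch.clamp(x + noise, 0.0, 1.0)
            pred = model(x_pert).argmax(dim=1)
            failures += int((pred != label).any())
    # One-sided exact binomial test: a small p-value indicates the true
    # failure probability is below the tolerated threshold tau.
    result = binomtest(failures, n_samples, p=tau, alternative="less")
    return result.pvalue < alpha, failures / n_samples
```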
URL
https://arxiv.org/abs/2404.16457