Abstract
Rapid advances in deep learning are accelerating its adoption across a wide range of applications, including safety-critical ones such as self-driving vehicles, drones, robots, and surveillance systems. These advances often take the form of modifications to existing models, applying variations of sophisticated techniques to improve performance. However, the resulting models are not immune to adversarial manipulations, which can cause a system to misbehave while going unnoticed by experts. Because existing deep learning models are modified so frequently, thorough analysis is needed to determine how each modification affects model robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness, using adversarial attacks as the measurement tool. Our methodology examines how variations of a model hold up against a range of adversarial attacks. With these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of how model changes affect robustness.
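To make the evaluation pattern concrete, the sketch below shows one common way to measure the robust accuracy of a model variant under an adversarial attack. The abstract does not name a specific attack or framework, so this is only an illustrative assumption: it uses PyTorch and the standard FGSM attack (Goodfellow et al., 2015), and the model and data-loader names are hypothetical placeholders, not the paper's actual setup.

```python
# Illustrative sketch only: the paper does not specify its attacks or framework.
# This shows the general pattern of measuring robust accuracy under FGSM in
# PyTorch; model/loader names below are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Generate FGSM adversarial examples: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # assumes inputs normalized to [0, 1]

def robust_accuracy(model, loader, epsilon, device="cpu"):
    """Fraction of test inputs still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Comparing model variants then reduces to running the same attack budget
# against each one (hypothetical models m0, m1 and test_loader):
# for name, model in {"baseline": m0, "modified": m1}.items():
#     print(name, robust_accuracy(model, test_loader, epsilon=8/255))
```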
URL
https://arxiv.org/abs/2405.01934