Abstract
In Virtual Reality (VR), adversarial attacks remain a significant security threat. Most deep learning-based methods for physical and digital adversarial attacks focus on enhancing attack performance, crafting adversarial examples that contain large printable distortions which human observers can easily identify. Attackers rarely constrain the naturalness and visual comfort of the generated attack image, resulting in conspicuous, unnatural attacks. To address this challenge, we propose a framework that incorporates style transfer to craft adversarial inputs in natural styles, exhibiting minimal detectability and a maximally natural appearance while maintaining superior attack capability.
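The core idea the abstract describes, jointly optimizing an attack objective and a naturalness (style) penalty, can be sketched on a toy model. This is a minimal illustration, not the paper's method: the linear classifier, the style reference vector, and the loss weight `lam` are all assumptions made for the example.

```python
import numpy as np

# Toy sketch of a style-constrained adversarial attack:
# minimize (log-prob of the true class) + lam * (distance to a style reference).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))        # hypothetical linear classifier: 3 classes, 8-dim input
x = rng.normal(size=8)             # clean input
style_target = rng.normal(size=8)  # stand-in for a natural-style reference
y_true = int(np.argmax(W @ x))     # label of the clean input

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x_adv, lam=0.1):
    p = softmax(W @ x_adv)
    attack = np.log(p[y_true] + 1e-12)           # drive true-class confidence down
    style = lam * np.sum((x_adv - style_target) ** 2)  # stay near the style reference
    return attack + style

# plain gradient descent on the joint loss via central finite differences
x_adv = x.copy()
for _ in range(200):
    g = np.zeros_like(x_adv)
    for i in range(len(x_adv)):
        d = np.zeros_like(x_adv)
        d[i] = 1e-5
        g[i] = (loss(x_adv + d) - loss(x_adv - d)) / 2e-5
    x_adv -= 0.05 * g

print("clean label:", y_true, "adversarial label:", int(np.argmax(W @ x_adv)))
```

The weight `lam` trades attack strength against naturalness; the paper's framework replaces the quadratic style penalty with a learned style-transfer objective over images.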
URL
https://arxiv.org/abs/2403.14778