Abstract
Latency attacks against object detection are a variant of adversarial attacks that aim to inflate inference time by generating additional "ghost" objects in a target image. However, generating ghost objects in the black-box scenario remains a challenge, since information about these ghost objects is opaque to the attacker. In this study, we demonstrate the feasibility of generating ghost objects in adversarial examples by extending the concept of "steal now, decrypt later" attacks. Once produced, these adversarial examples can be employed to exploit potential vulnerabilities in AI services, raising significant security concerns. The experimental results demonstrate that the proposed attack succeeds against various commonly used models and the Google Vision API without any prior knowledge of the target model. Additionally, the average cost of each attack is less than \$1, posing a significant threat to AI security.
URL
https://arxiv.org/abs/2404.15881