Abstract
Object detection tasks, crucial in safety-critical systems like autonomous driving, focus on pinpointing object locations. These detectors are known to be susceptible to backdoor attacks. However, existing backdoor techniques have primarily been adapted from classification tasks, overlooking deeper vulnerabilities specific to object detection. This paper bridges this gap by introducing Detector Collapse (DC), a brand-new backdoor attack paradigm tailored for object detection. DC is designed to instantly incapacitate detectors (i.e., severely impairing a detector's performance and culminating in denial-of-service). To this end, we develop two innovative attack schemes: Sponge, for triggering widespread misidentifications, and Blinding, for rendering objects invisible. Remarkably, we introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments. Our experiments on different detectors across several benchmarks show a significant improvement ($\sim$10\%-60\% absolute and $\sim$2-7$\times$ relative) in attack efficacy over state-of-the-art attacks.
Abstract (translated)
Translation: Object detection tasks, crucial in safety-critical systems such as autonomous driving, focus on precisely locating objects. These detectors are known to be susceptible to backdoor attacks. However, existing backdoor techniques have primarily been adapted from classification tasks, overlooking deeper vulnerabilities specific to object detection. This paper bridges this gap by introducing Detector Collapse (DC), a brand-new backdoor attack paradigm tailored for object detection. DC is designed to instantly incapacitate detectors (i.e., severely impairing the detector's performance, resulting in denial-of-service). To this end, we develop two innovative attack schemes: Sponge, for triggering widespread misidentifications, and Blinding, for rendering objects invisible. Notably, we introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments. Our experiments on multiple benchmarks show a significant improvement in attack efficacy (approximately 10%-60% absolute and 2-7× relative) over state-of-the-art attacks.
URL
https://arxiv.org/abs/2404.11357