Abstract
Multiple robots can collaboratively perceive a scene (e.g., detect objects) better than any individual robot, yet such collaboration is vulnerable to adversarial attacks when deep learning is used. Adversarial defense could address this, but its training requires knowledge of the attacking mechanism, which is often unavailable. In contrast, we propose ROBOSAC, a novel sampling-based defense strategy that generalizes to unseen attackers. Our key idea is that collaborative perception should lead to consensus rather than dissensus with individual perception. This motivates our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until a consensus is reached. In this framework, a larger sampled subset of teammates usually yields better perception performance but requires more sampling time to reject potential attackers. We therefore derive how many sampling trials are needed to ensure the desired size of an attacker-free subset, or equivalently, the maximum size of such a subset that we can successfully sample within a given number of trials. We validate our method on collaborative 3D object detection in autonomous driving scenarios.
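The trial-count derivation mentioned above follows the same combinatorial logic as RANSAC's stopping criterion. Below is a minimal, hedged sketch of that bound, not the paper's exact formulation: it assumes subsets are drawn uniformly at random and uses illustrative parameter names (`n_teammates`, `n_attackers`, `subset_size`, `success_prob`).

```python
from math import comb, log, ceil

def num_trials(n_teammates: int, n_attackers: int,
               subset_size: int, success_prob: float = 0.99) -> int:
    """Estimate how many random teammate subsets of size `subset_size`
    must be sampled so that, with probability >= `success_prob`, at
    least one sampled subset contains no attacker.
    (A RANSAC-style bound; an illustrative sketch, not the paper's exact derivation.)
    """
    # Probability that one uniformly sampled subset is attacker-free.
    p_clean = comb(n_teammates - n_attackers, subset_size) / comb(n_teammates, subset_size)
    if p_clean == 1.0:
        return 1  # no attackers can appear in the subset
    # Smallest t with (1 - p_clean)^t <= 1 - success_prob.
    return ceil(log(1 - success_prob) / log(1 - p_clean))

# Larger subsets are more likely to include an attacker,
# so they need more trials to find an attacker-free one.
print(num_trials(n_teammates=6, n_attackers=1, subset_size=1))
print(num_trials(n_teammates=6, n_attackers=1, subset_size=5))
```

This illustrates the trade-off stated in the abstract: collaborating with more teammates per sample improves perception but inflates the number of trials needed to guarantee an attacker-free subset.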
URL
https://arxiv.org/abs/2303.09495