Abstract
Collaborative perception has garnered considerable attention due to its capacity to address several inherent challenges in single-agent perception, including occlusion and out-of-range issues. However, existing collaborative perception systems heavily rely on precise localization systems to establish a consistent spatial coordinate system between agents. This reliance makes them susceptible to large pose errors or malicious attacks, resulting in substantial reductions in perception performance. To address this, we propose~$\mathtt{CoBEVGlue}$, a novel self-localized collaborative perception system, which achieves more holistic and robust collaboration without using an external localization system. The core of~$\mathtt{CoBEVGlue}$ is a novel spatial alignment module, which provides the relative poses between agents by effectively matching co-visible objects across agents. We validate our method on both real-world and simulated datasets. The results show that i) $\mathtt{CoBEVGlue}$ achieves state-of-the-art detection performance under arbitrary localization noise and attacks; and ii) the spatial alignment module can seamlessly integrate with a majority of previous methods, enhancing their performance by an average of $57.7\%$. Code is available at this https URL.
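The abstract's key idea is recovering relative poses between agents from matched co-visible objects rather than from an external localization system. The paper does not detail the alignment computation here, but once object correspondences are available, a relative pose can be estimated in closed form. The sketch below is a minimal, hypothetical stand-in (not the authors' actual module): it recovers a 2D rigid transform between two agents' BEV frames from matched object centers via the standard Kabsch/Umeyama SVD solution; the function name and interface are assumptions for illustration.

```python
import numpy as np

def estimate_relative_pose(src, dst):
    """Estimate a 2D rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (N, 2) arrays of matched co-visible object centers,
    expressed in each agent's own BEV coordinate frame.
    Closed-form Kabsch/Umeyama solution (illustrative only).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # 2x2 cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# usage: recover a known 30-degree rotation and a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
pts = np.random.default_rng(0).uniform(-20.0, 20.0, size=(8, 2))
R_est, t_est = estimate_relative_pose(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In a full pipeline, such an estimate would typically be wrapped in a robust matching/outlier-rejection loop (e.g., RANSAC over candidate correspondences), since object matches across agents are noisy; that robustness is presumably where the paper's spatial alignment module does its real work.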
URL
https://arxiv.org/abs/2406.12712