Abstract
Event cameras are novel bio-inspired sensors that capture motion dynamics with much higher temporal resolution than traditional cameras, since their pixels react asynchronously to brightness changes. They are therefore better suited for motion-related tasks such as motion segmentation. However, training event-based networks remains challenging, as obtaining ground truth is expensive, error-prone, and limited in frequency. In this article, we introduce EV-LayerSegNet, a self-supervised CNN for event-based motion segmentation. Inspired by a layered representation of the scene dynamics, we show that it is possible to learn affine optical flow and segmentation masks separately, and to use them to deblur the input events. The deblurring quality is then measured and used as the self-supervised learning loss. We train and test the network on a simulated dataset containing only affine motion, achieving an IoU of up to 71% and a detection rate of up to 87%.
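The abstract describes a contrast-maximization-style objective: events are warped by a per-layer affine motion model, weighted by soft segmentation masks, and the sharpness of the resulting image of warped events is rewarded. The following is a minimal sketch of such a loss, not the authors' code; all function and tensor names, the nearest-neighbour event accumulation, and the use of per-layer variance as the sharpness measure are illustrative assumptions.

```python
import torch

def warp_events(xy, t, t_ref, affine):
    """Displace event coordinates along an affine flow field.

    xy:     (N, 2) event pixel coordinates
    t:      (N,)   event timestamps
    t_ref:  scalar reference time to warp to
    affine: (2, 3) affine motion parameters [A | b], flow(x) = A @ x + b
    """
    A, b = affine[:, :2], affine[:, 2]
    flow = xy @ A.T + b                      # per-event flow, shape (N, 2)
    return xy + (t_ref - t).unsqueeze(1) * flow

def image_of_warped_events(xy_warped, weights, height, width):
    """Accumulate mask-weighted warped events into an image
    (nearest-neighbour rounding for brevity)."""
    img = torch.zeros(height, width)
    ix = xy_warped[:, 0].round().long().clamp(0, width - 1)
    iy = xy_warped[:, 1].round().long().clamp(0, height - 1)
    img.index_put_((iy, ix), weights, accumulate=True)
    return img

def deblurring_loss(xy, t, masks, affines, height, width):
    """Negative summed per-layer contrast: sharper warped images => lower loss.

    masks:   (L, N) soft per-event layer assignments (rows of a softmax)
    affines: (L, 2, 3) one affine motion model per layer
    """
    loss = 0.0
    for m, aff in zip(masks, affines):
        xyw = warp_events(xy, t, t.max(), aff)
        iwe = image_of_warped_events(xyw, m, height, width)
        loss = loss - iwe.var()              # maximize contrast of each layer
    return loss
```

Under this reading, the network only has to predict `masks` and `affines`; if a layer's mask selects events that truly move with that layer's affine model, warping collapses them onto sharp edges, the variance of the image of warped events rises, and the loss falls without any ground-truth labels.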
URL
https://arxiv.org/abs/2506.06596