Abstract
Event cameras have recently gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems. To unlock these solutions, it is necessary to develop algorithms that can leverage the unique nature of event data. However, the current state of the art is still highly influenced by the frame-based literature and usually fails to deliver on these promises. In this work, we take this into consideration and propose a novel self-supervised learning pipeline for the sequential estimation of event-based optical flow that allows models to scale to high inference frequencies. At its core is a continuously running, stateful neural model that is trained using a novel formulation of contrast maximization, which makes it robust to nonlinearities and varying statistics in the input events. Results across multiple datasets confirm the effectiveness of our method, which establishes a new state of the art in terms of accuracy for approaches trained or optimized without ground truth.
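For context, contrast maximization scores a candidate flow field by warping each event along its flow vector to a common reference time, accumulating the warped events into an image of warped events (IWE), and measuring how sharply they pile up, e.g., via the variance of the IWE. Below is a minimal PyTorch sketch of this generic principle under stated assumptions; it is not the paper's specific robust formulation, and the function name `iwe_variance_loss` and its arguments are hypothetical. Polarity handling and the paper's robustness mechanisms are omitted for brevity.

```python
import torch

def iwe_variance_loss(events, flow, height, width, t_ref=0.0):
    """Generic contrast-maximization loss (illustrative sketch).

    Warps events with the estimated flow, builds an image of warped
    events (IWE) via bilinear voting, and returns the negative
    variance of the IWE, so that minimizing this loss maximizes
    contrast.

    events: (N, 4) tensor of (x, y, t, p) per event
    flow:   (N, 2) per-event flow (u, v) in pixels/second, e.g.
            sampled from a dense flow map at each event's location
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    dt = t_ref - t                      # time to the reference slice
    xw = x + dt * flow[:, 0]            # warped x-coordinates
    yw = y + dt * flow[:, 1]            # warped y-coordinates

    # Bilinear voting: each warped event spreads its contribution
    # over its four neighboring integer pixels.
    x0, y0 = xw.floor(), yw.floor()
    iwe = torch.zeros(height * width, device=events.device)
    for dx in (0, 1):
        for dy in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            w = (1 - (xw - xi).abs()) * (1 - (yw - yi).abs())
            inside = (xi >= 0) & (xi < width) & (yi >= 0) & (yi < height)
            idx = (yi * width + xi).long()
            iwe.index_add_(0, idx[inside], w[inside])

    return -iwe.var()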
URL
https://arxiv.org/abs/2303.05214