Abstract
Underwater imaging is fundamentally challenging due to wavelength-dependent light attenuation, strong scattering from suspended particles, turbidity-induced blur, and non-uniform illumination. These effects impair standard cameras and make ground-truth motion nearly impossible to obtain. Event cameras, in contrast, offer microsecond temporal resolution and high dynamic range. Nonetheless, progress on event-based perception in underwater environments has been limited by the lack of datasets that pair realistic underwater optics with accurate optical flow. To address this problem, we introduce the first synthetic underwater benchmark dataset for event-based optical flow, derived from physically-based ray-traced RGBD sequences. By applying a modern video-to-event pipeline to the rendered underwater videos, we produce realistic event streams with dense ground-truth flow, depth, and camera motion. Moreover, we benchmark state-of-the-art learning-based and model-based optical flow prediction methods to understand how underwater light transport affects event formation and motion estimation accuracy. Our dataset establishes a new baseline for the future development and evaluation of underwater event-based perception algorithms. The source code and dataset for this project are publicly available at this https URL.
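Video-to-event pipelines of the kind mentioned above follow the standard event-camera model: a pixel fires an event whenever its log intensity changes by a fixed contrast threshold since the last event at that pixel. A minimal sketch of this idea (the threshold value, function name, and frame data are illustrative assumptions — real simulators such as those used in video-to-event conversion also interpolate between frames and model sensor noise):

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a grayscale video into events (x, y, t, polarity).

    An event fires each time the log intensity at a pixel changes
    by +/- threshold relative to a per-pixel reference level.
    Illustrative sketch only, not the paper's actual pipeline.
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_f = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_f - log_ref
        # Number of threshold crossings per pixel since the reference.
        n_events = np.floor(np.abs(diff) / threshold).astype(int)
        ys, xs = np.nonzero(n_events)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            for _ in range(n_events[y, x]):
                events.append((x, y, t, pol))
            # Advance the reference to the last crossed threshold level.
            log_ref[y, x] += pol * n_events[y, x] * threshold
    return events
```

Because events encode log-intensity changes, the wavelength-dependent attenuation and scattering described in the abstract directly shape which pixels cross the threshold, which is what makes the underwater setting interesting for event formation.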
URL
https://arxiv.org/abs/2601.10054
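Optical flow methods on a benchmark with dense ground-truth flow, such as the one described above, are commonly compared via the average endpoint error (AEPE): the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch (the function name is illustrative; the paper may also report additional metrics):

```python
import numpy as np

def average_endpoint_error(flow_pred, flow_gt):
    """AEPE between two flow fields of shape (H, W, 2), holding (u, v)."""
    # Per-pixel Euclidean distance between flow vectors, then mean.
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1)))
```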