Abstract
In recent years, convolutional neural networks (CNNs) have demonstrated increasing success at learning many computer vision tasks, including dense estimation problems such as optical flow and stereo matching. However, the joint prediction of these tasks, called scene flow, has traditionally been tackled with slow classical methods built on restrictive assumptions that fail to generalize. The work presented in this paper overcomes these drawbacks, in terms of both speed and accuracy, by proposing PWOC-3D, a compact CNN architecture that predicts scene flow from stereo image sequences in an end-to-end supervised setting. Further, large motion and occlusions are well-known problems in scene flow estimation. PWOC-3D employs specialized design decisions to explicitly model these challenges. In this regard, we propose a novel self-supervised strategy to predict occlusions from images, learned without any labeled occlusion data. Leveraging several such constructs, our network achieves competitive results on the KITTI benchmark and the challenging FlyingThings3D dataset. On KITTI in particular, PWOC-3D achieves second place among end-to-end deep learning methods with 48 times fewer parameters than the top-performing method.
URL
https://arxiv.org/abs/1904.06116