Abstract
The problem of scene flow estimation in depth videos has been attracting the attention of robot-vision researchers due to its potential applications in various areas of robotics. Conventional scene flow methods are difficult to use in real-life applications because of their high computational overhead. We propose a conditional adversarial network, SceneFlowGAN, for scene flow estimation. The proposed SceneFlowGAN applies loss functions at both ends of the network: the generator and the discriminator. The proposed network is the first attempt to estimate scene flow using generative adversarial networks, and it estimates both optical flow and disparity simultaneously from the input stereo images. The proposed method is evaluated on a large RGB-D benchmark scene flow dataset.
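The abstract does not give implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the kind of conditional GAN training described: a generator maps two stereo image pairs to optical flow and disparity, and losses are applied at both the generator and discriminator ends. The network shapes, channel counts, and loss weights are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps 2 stereo pairs (4 RGB frames, 12 channels) to a 4-channel map:
    optical flow (u, v), disparity, and disparity change (assumed layout)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 4, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (input frames, scene-flow map) pairs; conditioning is done by
    channel-wise concatenation, as in standard conditional GANs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12 + 4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames, flow):
        return self.net(torch.cat([frames, flow], dim=1))

# One illustrative training step; random tensors stand in for a real dataset.
gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

frames = torch.randn(2, 12, 64, 64)   # two stereo pairs at times t and t+1
gt_flow = torch.randn(2, 4, 64, 64)   # ground-truth flow + disparity

# Discriminator-end loss: real pairs vs. generated pairs.
fake_flow = gen(frames).detach()
d_real, d_fake = disc(frames, gt_flow), disc(frames, fake_flow)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator-end loss: adversarial term plus a supervised reconstruction term
# (the 10.0 weight is an arbitrary illustrative choice).
fake_flow = gen(frames)
d_fake = disc(frames, fake_flow)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 10.0 * l1(fake_flow, gt_flow)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```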
URL
https://arxiv.org/abs/1904.11163