Paper Reading AI Learner

PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation

2019-04-12 09:20:51
Rohan Saxena, René Schuster, Oliver Wasenmüller, Didier Stricker

Abstract

In the last few years, convolutional neural networks (CNNs) have demonstrated increasing success at learning many computer vision tasks including dense estimation problems such as optical flow and stereo matching. However, the joint prediction of these tasks, called scene flow, has traditionally been tackled using slow classical methods based on primitive assumptions which fail to generalize. The work presented in this paper overcomes these drawbacks efficiently (in terms of speed and accuracy) by proposing PWOC-3D, a compact CNN architecture to predict scene flow from stereo image sequences in an end-to-end supervised setting. Further, large motion and occlusions are well-known problems in scene flow estimation. PWOC-3D employs specialized design decisions to explicitly model these challenges. In this regard, we propose a novel self-supervised strategy to predict occlusions from images (learned without any labeled occlusion data). Leveraging several such constructs, our network achieves competitive results on the KITTI benchmark and the challenging FlyingThings3D dataset. Especially on KITTI, PWOC-3D achieves the second place among end-to-end deep learning methods with 48 times fewer parameters than the top-performing method.
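The occlusion-aware matching idea described in the abstract can be illustrated with a small sketch: a predicted soft occlusion mask downweights the target features before a correlation cost volume is built, so that occluded pixels contribute little to the matching cost. The snippet below is an illustrative NumPy sketch under assumed shapes and a plain dot-product correlation; the actual PWOC-3D layers (feature pyramids, warping, learned occlusion estimator) differ in detail.

```python
import numpy as np

def masked_cost_volume(feat1, feat2, occ_mask, max_disp=2):
    """Correlation cost volume between feature maps of shape (H, W, C).

    occ_mask has shape (H, W) with values in [0, 1]; positions predicted
    as occluded (mask near 0) are softly suppressed in feat2 before
    matching. Simplified sketch, not the exact PWOC-3D operator.
    """
    H, W, C = feat1.shape
    # Downweight target features at positions predicted occluded.
    feat2 = feat2 * occ_mask[..., None]
    D = 2 * max_disp + 1
    cost = np.zeros((H, W, D, D), dtype=feat1.dtype)
    padded = np.pad(feat2, ((max_disp, max_disp), (max_disp, max_disp), (0, 0)))
    for dy in range(D):
        for dx in range(D):
            shifted = padded[dy:dy + H, dx:dx + W]
            # Correlation over the channel dimension, normalized by C.
            cost[:, :, dy, dx] = (feat1 * shifted).sum(-1) / C
    return cost
```

With an all-ones mask this reduces to a standard local correlation volume as used in PWC-Net-style architectures; an all-zeros mask (everything occluded) yields zero cost everywhere, which is the intended effect of excluding occluded evidence from matching.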


URL

https://arxiv.org/abs/1904.06116

PDF

https://arxiv.org/pdf/1904.06116.pdf

