Paper Reading AI Learner

Models Matter, So Does Training: An Empirical Study of CNNs for Optical Flow Estimation

2018-09-14 20:27:49
Deqing Sun, Xiaodong Yang, Ming-Yu Liu, Jan Kautz

Abstract

We investigate two crucial and closely related aspects of CNNs for optical flow estimation: models and training. First, we design a compact but effective CNN model, called PWC-Net, according to simple and well-established principles: pyramidal processing, warping, and cost volume processing. PWC-Net is 17 times smaller in size, 2 times faster in inference, and 11% more accurate on Sintel final than the recent FlowNet2 model. It is the winning entry in the optical flow competition of the Robust Vision Challenge. Next, we experimentally analyze the sources of our performance gains. In particular, we use the same training procedure as PWC-Net to retrain FlowNetC, a sub-network of FlowNet2. The retrained FlowNetC is 56% more accurate on Sintel final than the previously trained one, and even 5% more accurate than the FlowNet2 model. We further improve the training procedure, increasing the accuracy of PWC-Net on Sintel by 10% and on KITTI 2012 and 2015 by 20%. Our newly trained model parameters and training protocols will be available at https://github.com/NVlabs/PWC-Net
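Two of the three design principles named in the abstract, warping and cost volume processing, can be illustrated with a small sketch. The snippet below is an assumption-laden simplification, not the paper's implementation: it uses nearest-neighbour warping (PWC-Net uses bilinear sampling on learned CNN features) and a plain correlation cost volume over a small search window.

```python
import numpy as np

def warp(feat, flow):
    # feat: (H, W, C) feature map; flow: (H, W, 2) per-pixel (dx, dy).
    # Nearest-neighbour backward warping; PWC-Net warps the second
    # image's features toward the first using bilinear interpolation.
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return feat[src_y, src_x]

def cost_volume(feat1, feat2_warped, max_disp=4):
    # Correlation between feat1 and shifted copies of the warped feat2
    # over a (2*max_disp+1)^2 search window, as in cost-volume layers.
    H, W, C = feat1.shape
    d = max_disp
    cv = np.zeros((H, W, (2 * d + 1) ** 2), dtype=feat1.dtype)
    padded = np.pad(feat2_warped, ((d, d), (d, d), (0, 0)))
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = padded[d + dy:d + dy + H, d + dx:d + dx + W]
            cv[..., k] = (feat1 * shifted).mean(axis=-1)
            k += 1
    return cv
```

In the full network these two operations are applied at every level of a feature pyramid: the flow estimated at a coarser level warps the features at the next finer level, and the resulting cost volume feeds a small CNN that refines the flow. Warping at each level keeps the residual displacement small, which is why a modest `max_disp` suffices.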

URL

https://arxiv.org/abs/1809.05571

PDF

https://arxiv.org/pdf/1809.05571.pdf
