Paper Reading AI Learner

EDVR: Video Restoration with Enhanced Deformable Convolutional Networks

2019-05-07 17:58:14
Xintao Wang, Kelvin C.K. Chan, Ke Yu, Chao Dong, Chen Change Loy

Abstract

Video restoration tasks, including super-resolution, deblurring, etc., are drawing increasing attention in the computer vision community. A challenging benchmark named REDS was released in the NTIRE19 Challenge. This new benchmark challenges existing methods in two respects: (1) how to align multiple frames given large motions, and (2) how to effectively fuse different frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is done at the feature level using deformable convolutions in a coarse-to-fine manner. Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. Thanks to these modules, our EDVR wins first place in all four tracks of the NTIRE19 video restoration and enhancement challenges, outperforming the second-place entries by a large margin. EDVR also demonstrates superior performance to state-of-the-art published methods on video super-resolution and deblurring. The code is available at https://github.com/xinntao/EDVR.
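The temporal side of TSA fusion can be illustrated with a minimal NumPy sketch: each (already aligned) neighboring frame is weighted by a sigmoid of its per-pixel similarity to the reference frame, so frames that resemble the reference contribute more to the fused feature. This is only a simplified illustration under stated assumptions, not the authors' implementation — EDVR computes similarity on learned embeddings and fuses with further convolutions; the function name and shapes here are hypothetical.

```python
import numpy as np

def temporal_attention_fuse(feats, ref_idx):
    """Simplified sketch of TSA-style temporal attention.

    feats:   (T, C, H, W) features of T aligned frames.
    ref_idx: index of the reference frame within feats.

    Each frame gets a per-pixel attention map: the sigmoid of its
    dot-product similarity (over channels) with the reference frame.
    The attention-weighted frames are then averaged.
    """
    ref = feats[ref_idx]                              # (C, H, W)
    # per-pixel channel-wise similarity of each frame to the reference
    sim = np.einsum('tchw,chw->thw', feats, ref)      # (T, H, W)
    attn = 1.0 / (1.0 + np.exp(-sim))                 # sigmoid -> (0, 1)
    weighted = feats * attn[:, None]                  # broadcast over C
    return weighted.mean(axis=0)                      # (C, H, W)

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 4, 8, 8)).astype(np.float32)
fused = temporal_attention_fuse(feats, ref_idx=2)
print(fused.shape)  # (4, 8, 8)
```

In the paper, the attention-modulated features are further aggregated by a fusion convolution and refined with spatial attention; the sketch above only captures the temporal weighting idea.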


URL

https://arxiv.org/abs/1905.02716

PDF

https://arxiv.org/pdf/1905.02716.pdf

