Kernel-Free Image Deblurring with a Pair of Blurred/Noisy Images

2019-03-26 03:47:03
Chunzhi Gu, Xuequan Lu, Ying He, Chao Zhang

Abstract

Complex blur, such as a mixture of space-variant and space-invariant blur, is hard to model mathematically and widely exists in real images. In the real world, a common type of blur occurs when capturing images in low-light environments. In this paper, we propose a novel image deblurring method that does not need to estimate blur kernels. We utilize a pair of images that can be easily acquired in low-light situations: (1) a blurred image taken with low shutter speed and low ISO noise, and (2) a noisy image captured with high shutter speed and high ISO noise. Specifically, the blurred image is first sliced into patches, and we extend the Gaussian mixture model (GMM) to model the underlying intensity distribution of each patch using the corresponding patches in the noisy image. We compute patch correspondences by analyzing the optical flow between the two images. The Expectation-Maximization (EM) algorithm is used to estimate the parameters of the GMM. To preserve sharp features, we add a bilateral term to the objective function in the M-step. We finally add a detail layer to the deblurred image for refinement. Extensive experiments on both synthetic and real-world data demonstrate that our method outperforms state-of-the-art techniques in terms of robustness, visual quality, and quantitative metrics. We will make our dataset and source code publicly available.
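The pipeline the abstract describes (patch correspondence via optical flow, per-patch GMM fitting by EM, a detail-layer refinement) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: it assumes aligned grayscale inputs, substitutes scikit-learn's plain GaussianMixture (whose fit runs EM) for the paper's extended GMM with the bilateral M-step term, uses OpenCV's Farneback flow for correspondence, and approximates the detail layer with a bilateral filter; the function name deblur_with_noisy_pair and all parameter values are assumptions for illustration.

import cv2
import numpy as np
from sklearn.mixture import GaussianMixture


def deblur_with_noisy_pair(blurred, noisy, patch=32, n_components=3):
    """blurred, noisy: aligned uint8 grayscale images of the same size."""
    h, w = blurred.shape
    # Patch correspondence: dense optical flow from the blurred to the noisy image.
    flow = cv2.calcOpticalFlowFarneback(blurred, noisy, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    out = np.zeros_like(blurred, dtype=np.float32)
    for y in range(0, h - patch + 1, patch):      # border remainders skipped for brevity
        for x in range(0, w - patch + 1, patch):
            # Shift the window by the mean flow to locate the matching noisy patch.
            dx, dy = flow[y:y + patch, x:x + patch].mean(axis=(0, 1))
            ny = int(np.clip(y + dy, 0, h - patch))
            nx = int(np.clip(x + dx, 0, w - patch))
            samples = noisy[ny:ny + patch, nx:nx + patch].reshape(-1, 1).astype(np.float64)
            # EM (inside sklearn's fit) estimates the GMM over the noisy patch's intensities.
            gmm = GaussianMixture(n_components=n_components, random_state=0).fit(samples)
            # Crude stand-in for the paper's M-step: snap each blurred pixel to the
            # mean of its most likely component (no bilateral regularization here).
            bp = blurred[y:y + patch, x:x + patch].reshape(-1, 1).astype(np.float64)
            comp = gmm.predict(bp)
            out[y:y + patch, x:x + patch] = gmm.means_[comp, 0].reshape(patch, patch)
    # Refinement: add back a detail layer extracted from the denoised noisy image.
    denoised = cv2.bilateralFilter(noisy, 9, 75, 75).astype(np.float32)
    detail = noisy.astype(np.float32) - denoised
    return np.clip(out + detail, 0, 255).astype(np.uint8)

The "snap to the most likely component mean" step is where the paper instead optimizes a bilateral-regularized objective; the sketch only shows where in the pipeline that optimization would sit.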

URL

https://arxiv.org/abs/1903.10667

PDF

https://arxiv.org/pdf/1903.10667.pdf
