Paper Reading AI Learner

Aesthetic-Driven Image Enhancement by Adversarial Learning

2018-07-02 06:25:18
Yubin Deng, Chen Change Loy, Xiaoou Tang

Abstract

We introduce EnhanceGAN, an adversarial-learning-based model that performs automatic image enhancement. Traditional image enhancement frameworks typically involve training models in a fully-supervised manner, which requires expensive annotations in the form of aligned image pairs. In contrast to these approaches, our proposed EnhanceGAN only requires weak supervision (binary labels on image aesthetic quality) and is able to learn enhancement operators for the task of aesthetic-based image enhancement. In particular, we show the effectiveness of a piecewise color enhancement module trained with weak supervision, and extend the proposed EnhanceGAN framework to learning a deep filtering-based aesthetic enhancer. The full differentiability of our image enhancement operators enables training EnhanceGAN in an end-to-end manner. We further demonstrate the capability of EnhanceGAN in learning aesthetic-based image cropping without any ground-truth cropping pairs. Our weakly-supervised EnhanceGAN reports competitive quantitative results on aesthetic-based color enhancement as well as automatic image cropping, and a user study confirms that our image enhancement results are on par with, or even preferred over, professional enhancement.
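
The abstract describes the setup only at a high level: a generator predicts parameters of a differentiable enhancement operator, and a discriminator trained on binary aesthetic labels supplies the weak supervision. The sketch below is a minimal, hypothetical PyTorch illustration of that kind of weakly-supervised adversarial training, not the authors' implementation; the per-channel scale-and-shift operator, the tiny network shapes, and the names (ParamGenerator, AestheticDiscriminator, train_step) are illustrative assumptions standing in for the paper's piecewise color enhancement module and aesthetic discriminator.

```python
# Minimal, hypothetical sketch of a weakly-supervised adversarial enhancer.
# All names, shapes, and the simple color operator are assumptions for illustration.
import torch
import torch.nn as nn

class ParamGenerator(nn.Module):
    """Predicts parameters of a differentiable enhancement operator
    (a per-channel scale and shift stands in for a richer color module)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 6),  # 3 channel scales + 3 channel shifts
        )

    def forward(self, x):
        p = self.backbone(x)
        scale = 1.0 + 0.5 * torch.tanh(p[:, :3]).view(-1, 3, 1, 1)
        shift = 0.2 * torch.tanh(p[:, 3:]).view(-1, 3, 1, 1)
        # The operator is differentiable, so gradients from the
        # discriminator flow back into the parameter generator.
        return (x * scale + shift).clamp(0, 1)

class AestheticDiscriminator(nn.Module):
    """Binary aesthetic-quality classifier: the weak supervision signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # logit for "high aesthetic quality"
        )

    def forward(self, x):
        return self.net(x)

G, D = ParamGenerator(), AestheticDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(low_quality, high_quality):
    # Discriminator step: high-aesthetic images are "real", enhanced outputs "fake".
    enhanced = G(low_quality).detach()
    d_loss = bce(D(high_quality), torch.ones(high_quality.size(0), 1)) + \
             bce(D(enhanced), torch.zeros(enhanced.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push enhanced images toward the high-aesthetic class.
    enhanced = G(low_quality)
    g_loss = bce(D(enhanced), torch.ones(enhanced.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Because only binary aesthetic labels are needed (no aligned before/after pairs), the two batches passed to train_step can come from independent pools of low- and high-quality images.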

URL

https://arxiv.org/abs/1707.05251

PDF

https://arxiv.org/pdf/1707.05251.pdf
