Abstract
Photorealistic style transfer aims to transfer the style of a reference image to a content image while preserving the content image's original structure and detail, so that the result still looks like a real photograph. Although several photorealistic stylization methods have been proposed, they are prone to losing details of the content image and producing irregular, distorted structures. In this paper, we use a high-resolution network as the image generation network. Unlike other methods, which reduce the resolution and then restore it, our generation network maintains a high-resolution representation throughout the process. By connecting high-resolution subnetworks to low-resolution subnetworks in parallel and repeatedly performing multi-scale fusion, the high-resolution subnetworks continuously receive information from the low-resolution ones. This allows our network to discard less of the information contained in the image, so the generated images have finer structure and less distortion, which is crucial to visual quality. We conducted extensive experiments and compared the results with existing methods. The experimental results show that our model is effective and produces better results than existing methods for photorealistic image stylization. Our PyTorch source code will be publicly available at https://github.com/limingcv/Photorealistic-Style-Transfer
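To illustrate the parallel multi-resolution design described in the abstract, the sketch below shows a toy block in which a high-resolution branch never downsamples while a low-resolution branch's features are upsampled and fused back in. This is a minimal illustration under assumed channel counts and layer choices, not the authors' actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusionBlock(nn.Module):
    """Toy parallel-resolution block (illustrative only):
    the high-res branch keeps full spatial resolution throughout,
    the low-res branch runs at half resolution, and its features
    are upsampled and added back (one step of multi-scale fusion)."""

    def __init__(self, channels=16):
        super().__init__()
        self.high = nn.Conv2d(channels, channels, 3, padding=1)            # full res
        self.low = nn.Conv2d(channels, channels, 3, stride=2, padding=1)   # half res
        self.fuse = nn.Conv2d(channels, channels, 1)                       # fusion

    def forward(self, x):
        h = F.relu(self.high(x))                       # stays at input resolution
        l = F.relu(self.low(x))                        # computed at half resolution
        l_up = F.interpolate(l, size=h.shape[-2:],     # upsample back to full res
                             mode='bilinear', align_corners=False)
        return F.relu(self.fuse(h + l_up))             # high-res branch receives
                                                       # low-res information

block = TwoBranchFusionBlock(channels=16)
out = block(torch.randn(1, 16, 64, 64))
print(out.shape)  # spatial size is preserved: torch.Size([1, 16, 64, 64])
```

Because the high-resolution path is never collapsed and re-expanded, the output resolution always matches the input, which is the property the abstract argues reduces detail loss and distortion.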
URL
https://arxiv.org/abs/1904.11617