Abstract
Tremendous advances in image restoration tasks such as denoising and super-resolution have been achieved using neural networks. Such approaches generally employ very deep architectures, a large number of parameters, large receptive fields, and high nonlinear modeling capacity. Obtaining efficient and fast image restoration networks requires reducing these demands. In this paper we propose a novel activation function, the multi-bin trainable linear unit (MTLU), which increases nonlinear modeling capacity while allowing lighter and shallower networks. We validate the proposed fast image restoration networks for image denoising (FDnet) and super-resolution (FSRnet) on standard benchmarks. We achieve large improvements in both memory and runtime over the current state of the art at comparable or better PSNR accuracies.
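The core idea of MTLU is a piecewise-linear activation: the input range is divided into a fixed number of bins, and each bin applies its own learnable affine function. The sketch below illustrates this with NumPy; the function name, the bin-indexing scheme, and the parameter layout are illustrative assumptions for exposition, not the paper's exact formulation (in the actual networks the slopes and biases would be trained by backpropagation).

```python
import numpy as np

def mtlu(x, slopes, biases, bin_width=0.5):
    """Sketch of a multi-bin trainable linear unit (MTLU).

    The real line is partitioned into len(slopes) bins of width
    `bin_width`, centered around zero; inputs falling in bin k are
    mapped through y = slopes[k] * x + biases[k]. Inputs outside the
    covered range are clamped to the outermost bins. In a network,
    `slopes` and `biases` would be trainable parameters.
    """
    n_bins = len(slopes)
    # Assign each input to a bin index (assumed indexing scheme).
    k = np.floor(x / bin_width).astype(int) + n_bins // 2
    k = np.clip(k, 0, n_bins - 1)
    return slopes[k] * x + biases[k]

# With unit slopes and zero biases, MTLU reduces to the identity;
# zeroing the slopes of the negative bins makes it ReLU-like,
# showing how simpler activations are special cases.
slopes = np.ones(8)
biases = np.zeros(8)
y = mtlu(np.array([-1.2, 0.0, 0.7]), slopes, biases)
```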
URL
https://arxiv.org/abs/1807.11389