Abstract
Existing image restoration approaches typically employ large networks trained specifically for designated degradations. Despite being effective, such methods inevitably entail considerable storage costs and computational overhead due to their reliance on task-specific networks. In this work, we go beyond this well-established framework and exploit the inherent commonalities among image restoration tasks. The primary objective is to identify components that are shareable across restoration tasks and to augment the shared components with modules trained for individual tasks. Towards this goal, we propose AdaIR, a novel framework that enables low storage cost and efficient training without sacrificing performance. Specifically, a generic restoration network is first constructed through self-supervised pre-training using synthetic degradations. After the pre-training phase, adapters are trained to adapt the pre-trained network to specific degradations. AdaIR requires training only lightweight, task-specific modules, ensuring a more efficient storage and training regimen. We have conducted extensive experiments to validate the effectiveness of AdaIR and to analyze the influence of the pre-training strategy on discovering shareable components. The results show that AdaIR achieves outstanding performance on multi-task restoration while requiring significantly fewer parameters (1.9 MB) and less training time (7 hours) for each restoration task. The source code and trained models will be released.
URL
https://arxiv.org/abs/2404.11475