Abstract
Deep unfolding networks have seen wide success in compressive sensing (CS) because they combine simplicity with interpretability. However, since most deep unfolding networks are iterative, they incur significant parameter redundancy. In this work, we propose a novel recursion-based framework to improve the efficiency of deep unfolding models. First, recursions are used to effectively eliminate the redundancies in deep unfolding networks. Second, we randomize the number of recursions during training to reduce the overall training time. Finally, to fully exploit the power of recursions, we introduce a learnable unit that modulates the model's features based on both the total number of iterations and the current iteration index. To evaluate the proposed framework, we apply it to both ISTA-Net+ and COAST. Extensive testing shows that our framework lets the network shed up to 75% of its learnable parameters while largely maintaining its performance, and it reduces training time by about 21% for ISTA-Net+ and 42% for COAST. Moreover, when presented with a limited training dataset, the recursive models match or even outperform their respective non-recursive baselines. Code and pretrained models are available at this https URL.
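The three ideas in the abstract (sharing one unfolding phase across iterations, randomizing the recursion count during training, and modulating features by iteration index) can be illustrated with a minimal, hypothetical sketch. All names and the scalar modulation below are illustrative assumptions, not the paper's actual architecture:

```python
import random

class RecursiveUnfoldingSketch:
    """Hypothetical sketch: a single shared phase applied R times
    (weight sharing instead of R distinct phases), with a modulation
    factor that depends on the current index i and the total count R."""

    def __init__(self, step_size=0.1):
        # one shared parameter, reused by every recursion
        self.step_size = step_size

    def modulation(self, i, total):
        # stand-in for the learnable modulation unit: conditions the
        # update on both the iteration index and the total recursions
        return 1.0 / (1.0 + i / total)

    def phase(self, x, grad, i, total):
        # one shared gradient-style phase, applied recursively
        return x - self.step_size * self.modulation(i, total) * grad(x)

    def forward(self, x0, grad, total_recursions):
        x = x0
        for i in range(total_recursions):
            x = self.phase(x, grad, i, total_recursions)
        return x

# Training-time randomization of the recursion count, as the abstract
# describes: draw R per batch instead of always unrolling the maximum.
def sample_recursions(r_min=3, r_max=12):
    return random.randint(r_min, r_max)
```

For example, driving a toy objective f(x) = x² toward its minimum: `RecursiveUnfoldingSketch(0.1).forward(5.0, lambda x: 2 * x, 10)` shrinks the iterate toward 0 while using only one set of phase parameters.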
URL
https://arxiv.org/abs/2305.05505