Abstract
Model-based deep learning methods that combine imaging physics with learned regularization priors have emerged as powerful tools for parallel MRI acceleration. The main focus of this paper is to determine the utility of the monotone operator learning (MOL) framework in the parallel MRI setting. The MOL algorithm alternates between a gradient descent step using a monotone convolutional neural network (CNN) and a conjugate gradient step that encourages data consistency. The benefits of this approach include guarantees similar to those of compressive sensing algorithms, including uniqueness, convergence, and stability, while being significantly more memory efficient than unrolled methods. We validate the proposed scheme by comparing it with different unrolled algorithms in the context of accelerated parallel MRI for static and dynamic settings.
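The alternation described above can be illustrated with a minimal sketch. This is not the paper's implementation: the forward operator is a random matrix standing in for the undersampled parallel-MRI operator, and the learned monotone CNN is replaced by a hand-crafted monotone map (the soft-thresholding residual). Only the overall structure, a gradient step through a monotone operator followed by a conjugate-gradient data-consistency solve, follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in forward operator
x_true = np.zeros(n)
x_true[:8] = rng.standard_normal(8)           # sparse ground truth
b = A @ x_true                                # measurements

lam, alpha = 1.0, 0.5                         # DC weight, gradient step size

def monotone_op(x):
    # Placeholder for the learned monotone CNN: the residual of
    # soft-thresholding, x - soft(x), which is a monotone map.
    return x - np.sign(x) * np.maximum(np.abs(x) - 0.05, 0.0)

def cg_solve(rhs, n_iter=20):
    # Solve (A^T A + lam I) x = rhs with conjugate gradient.
    x = np.zeros_like(rhs)
    r = rhs - (A.T @ (A @ x) + lam * x)
    p = r.copy()
    for _ in range(n_iter):
        Ap = A.T @ (A @ p) + lam * p
        step = (r @ r) / (p @ Ap)
        x = x + step * p
        r_new = r - step * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

x = np.zeros(n)
for _ in range(30):
    z = x - alpha * monotone_op(x)      # gradient step via monotone operator
    x = cg_solve(A.T @ b + lam * z)     # CG data-consistency step
```

After the loop, `x` approximately fits the measurements while the monotone step acts as the learned prior would in the full method.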
URL
https://arxiv.org/abs/2304.01351