Abstract

Bilinear models such as low-rank and dictionary methods, which decompose dynamic data into spatial and temporal factor matrices, are powerful and memory-efficient tools for the recovery of dynamic MRI data. Current bilinear methods rely on sparsity and energy-compaction priors on the factor matrices to regularize the recovery. Motivated by the deep image prior, we introduce a novel bilinear model whose factor matrices are generated by convolutional neural networks (CNNs). The CNN parameters, and equivalently the factors, are learned from the undersampled data of the specific subject. Unlike current unrolled deep-learning methods, which require the storage of all the time frames in the dataset, the proposed approach requires storing only the factors, a compressed representation; this makes the scheme directly applicable to large-scale dynamic applications, including the free-breathing cardiac MRI considered in this work. To reduce the run time and to improve performance, we initialize the CNN parameters using existing factorization methods. We use sparsity regularization of the network parameters to minimize overfitting to measurement noise. Our experiments on free-breathing, ungated cardiac cine data acquired with a navigated golden-angle gradient-echo radial sequence show that our method reduces spatial blurring compared to classical bilinear methods as well as a recent unsupervised deep-learning approach.
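To make the formulation concrete, the PyTorch sketch below is a minimal, hypothetical rendering of the scheme the abstract describes: two small CNNs generate the spatial and temporal factor matrices, the frame series is formed as their product, and the network weights are fit to one subject's undersampled measurements with an L1 (sparsity) penalty on the parameters. The architectures, sizes, and the `forward_op` placeholder for the NUFFT/coil-sensitivity operator are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (all names and sizes hypothetical) of a CNN-generated
# bilinear model: the frames X are represented as V^T U, where the spatial
# factors U and temporal factors V are outputs of two small CNNs fitted to
# one subject's undersampled data, with L1 regularization on the weights.

import torch
import torch.nn as nn

rank, n_frames, nx, ny = 8, 200, 128, 128  # illustrative sizes


class SpatialGenerator(nn.Module):
    """2-D CNN mapping a fixed random code to `rank` spatial factor images."""
    def __init__(self):
        super().__init__()
        self.register_buffer("code", torch.randn(1, 16, nx, ny))  # fixed input
        self.net = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, rank, 3, padding=1),
        )

    def forward(self):
        return self.net(self.code).reshape(rank, nx * ny)  # U: (rank, nx*ny)


class TemporalGenerator(nn.Module):
    """1-D CNN mapping a fixed random code to `rank` temporal basis functions."""
    def __init__(self):
        super().__init__()
        self.register_buffer("code", torch.randn(1, 16, n_frames))  # fixed input
        self.net = nn.Sequential(
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, rank, 3, padding=1),
        )

    def forward(self):
        return self.net(self.code).squeeze(0)  # V: (rank, n_frames)


def forward_op(frames):
    # Placeholder for the measurement operator A (NUFFT + coil sensitivities
    # for golden-angle radial data); identity here purely for illustration.
    return frames


g_u, g_v = SpatialGenerator(), TemporalGenerator()
opt = torch.optim.Adam(list(g_u.parameters()) + list(g_v.parameters()), lr=1e-3)
lam = 1e-5                          # weight of the L1 penalty on the parameters
b = torch.randn(n_frames, nx * ny)  # stand-in for the measured data

for it in range(200):
    opt.zero_grad()
    U, V = g_u(), g_v()
    frames = V.t() @ U  # (n_frames, nx*ny); only the factors need storing
    data_fit = (forward_op(frames) - b).pow(2).sum()
    l1 = sum(p.abs().sum() for p in g_u.parameters()) \
       + sum(p.abs().sum() for p in g_v.parameters())
    (data_fit + lam * l1).backward()
    opt.step()
```

In the actual method, the forward operator would act on the factored representation frame-by-frame rather than on a fully materialized frame stack, and the CNN parameters would be initialized from an existing factorization to cut run time, as the abstract notes.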
