Abstract

Purpose: This study developed a data-driven optimization approach to improve the accuracy of deep learning QSM quantification.

Methods: The proposed deep learning QSM pipeline consisted of two projection onto convex sets (POCS) models designed to decouple the trainable network components from the spherical mean value (SMV) filters and the dipole kernel in the data-driven optimization: a background field removal network (POCSnet1) and a dipole inversion network (POCSnet2). Both POCSnet1 and POCSnet2 were unrolled V-Nets with iterative data-driven optimization to enforce data fidelity. For training POCSnet1, we simulated phantom data with random geometric shapes as the background susceptibility sources; for training POCSnet2, we used geometric shapes to mimic the QSM. Evaluation was performed on synthetic data, a public COSMOS dataset (N = 1), and clinical data from a Parkinson's disease cohort (N = 71) and a small-vessel disease cohort (N = 26). For comparison, DLL2, FINE, and autoQSM were implemented and tested under the same experimental settings.

Results: On COSMOS, results from POCSnet1 were more similar to those of the V-SHARP method, with NRMSE = 23.7% and SSIM = 0.995, compared with NRMSE = 62.7% and SSIM = 0.975 for SHARQnet, a naïve V-Net model. On COSMOS, the NRMSE and HFEN for POCSnet2 were 58.1% and 56.7%, whereas for DLL2, FINE, and autoQSM they were 62.0% and 61.2%, 69.8% and 67.5%, and 87.5% and 85.3%, respectively. On the Parkinson's disease cohort, our results were consistent with those obtained from V-SHARP + STAR-QSM, with biases below 3%, and outperformed SHARQnet + DeepQSM, which had biases of 7% to 10%. The sensitivity of cerebral microbleed detection using our pipeline was 100%, compared with 92% for SHARQnet + DeepQSM.

Conclusion: Data-driven optimization improved the accuracy of QSM quantification compared with that of naïve V-Net models.
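To make the idea of an unrolled, data-fidelity-enforcing iteration concrete, the sketch below shows a minimal, hypothetical POCS-style dipole inversion loop: a placeholder denoiser (standing in for the trained V-Net block of POCSnet2) alternates with a gradient step on the dipole-kernel consistency term. The function names, step size, and iteration count are illustrative assumptions, not the authors' implementation; an analogous structure with SMV filters replacing the dipole kernel would correspond to the background field removal stage (POCSnet1).

import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0.0, 0.0, 1.0)):
    # Standard k-space dipole kernel D(k) = 1/3 - (k . B0)^2 / |k|^2 from the QSM forward model.
    axes = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k_dot_b0 = kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - (k_dot_b0**2) / k2
    D[k2 == 0] = 0.0  # the DC component is undetermined; set it to zero by convention
    return D

def unrolled_dipole_inversion(local_field, denoiser, n_iters=5, step=1.0):
    # Hypothetical unrolled loop: alternate a learned regularization step (here a
    # placeholder `denoiser` standing in for a trained V-Net block) with a gradient
    # step on the dipole-consistency term || D * F(chi) - F(f) ||^2.
    D = dipole_kernel(local_field.shape)
    field_k = np.fft.fftn(local_field)
    chi = np.zeros_like(local_field)
    for _ in range(n_iters):
        chi = denoiser(chi)                          # network / regularization step
        residual_k = D * np.fft.fftn(chi) - field_k  # forward-model residual in k-space
        chi = chi - step * np.real(np.fft.ifftn(D * residual_k))  # data-fidelity gradient step (D is real)
    return chi

# Toy usage: identity "denoiser" applied to a small random local-field volume.
field = 0.01 * np.random.randn(32, 32, 32)
chi_hat = unrolled_dipole_inversion(field, denoiser=lambda x: x)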
