Abstract
Recent human magnetic resonance imaging (MRI) studies continually push the boundaries of spatial resolution as a means to enhance neuroanatomical detail and increase the accuracy and sensitivity of derived brain morphometry measures. However, the acquisitions required to achieve these resolutions have a higher noise floor, potentially impacting segmentation and morphometric analysis results. This study proposes a novel, fast, robust, and resolution-invariant deep learning method to denoise structural human brain MRIs. We explore denoising of T1-weighted (T1w) brain images from varying field strengths (1.5T to 7T), voxel sizes (1.2mm to 250µm), scanner vendors (Siemens, GE, and Philips), and diseased and healthy participants across a wide age range (young adults to aging individuals). Our proposed Fast-Optimized Network for Denoising through residual Unified Ensembles (FONDUE) method demonstrated stable denoising capabilities across multiple resolutions, with performance on par with or superior to state-of-the-art methods while being several orders of magnitude faster at low relative cost when using a dedicated Graphics Processing Unit (GPU). FONDUE achieved the best performance on at least one of the four denoising-performance metrics on every test dataset used, demonstrating its generalization capabilities and stability. Due to its high-quality performance, robustness, fast execution times, and relatively low GPU memory requirements, as well as its open-source public availability, FONDUE can be widely used for structural MRI denoising, especially in large-cohort studies. We have made the FONDUE repository, all training and evaluation scripts, and the trained weights available at https://github.com/waadgo/FONDUE.