Abstract

Background: Partial volume correction using anatomical magnetic resonance (MR) images (MR-PVC) is useful for accurately quantifying tracer uptake on brain positron emission tomography (PET) images. However, the MR segmentation steps required for MR-PVC are time-consuming and hinder its widespread clinical use. Here, we aimed to develop a deep learning model that directly predicts PV-corrected maps from PET and MR images, ultimately improving MR-PVC throughput.

Methods: We used MR T1-weighted and [11C]PiB PET images from 192 participants in the Alzheimer’s Disease Neuroimaging Initiative database as input data. PV-corrected maps calculated with the region-based voxel-wise (RBV) PVC method served as the training target. A two-dimensional U-Net model was trained and validated by sixfold cross-validation on the dataset from 156 participants, and then tested on MR T1-weighted and [11C]PiB PET images from 36 participants acquired at sites other than those of the training dataset. As indicators for validation and testing, we calculated the structural similarity index (SSIM) of the PV-corrected maps and the intraclass correlation coefficient (ICC) of the PV-corrected standardized uptake value between RBV PVC and deepPVC.

Results: High SSIM (0.884 ± 0.021) and ICC (0.921 ± 0.042) values were observed in the validation data, as well as in the test data (SSIM, 0.876 ± 0.028; ICC, 0.894 ± 0.051). The computation time required to predict a PV-corrected map for a participant (48 s without a graphics processing unit) was much shorter than that of the RBV PVC and MR segmentation processes.

Conclusion: These results suggest that the deepPVC model directly predicts PV-corrected maps from MR and PET images and improves MR-PVC throughput by skipping the MR segmentation steps.
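The abstract evaluates agreement between RBV PVC and deepPVC using SSIM and ICC. As a minimal NumPy sketch of these two metrics — assuming a single-window (global) SSIM rather than the standard sliding-window variant the authors likely used, and an ICC(2,1) two-way absolute-agreement formulation, which the abstract does not specify:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM computed over whole images.
    (Published SSIM values are usually averages over local windows.)"""
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_targets, k_raters) array, e.g. regional SUVs measured by
    RBV PVC (column 0) and deepPVC (column 1)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-target means
    col_means = ratings.mean(axis=0)  # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between-target variance
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between-rater variance
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Both metrics equal 1.0 for perfect agreement; the reported values (SSIM ≈ 0.88, ICC ≈ 0.9) indicate close but not identical maps and uptake values.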
