Abstract

Our work aims to exploit deep learning (DL) models to automatically segment diagnostic regions involved in Alzheimer’s disease (AD) in 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) volumetric scans, in order to provide a more objective diagnosis of this disease and to reduce the variability induced by manual segmentation. The dataset used in this study consists of 102 volumes (40 controls, 39 with established AD, and 23 with established mild cognitive impairment (MCI)). The ground truth was generated by an expert user who identified six regions in the original scans, including the temporal, parietal, and frontal lobes. The implemented architectures are the U-Net3D and V-Net networks, which were appropriately adapted to our data to optimize performance. All trained segmentation networks were tested on 22 subjects using the Dice similarity coefficient (DSC) and other similarity indices, namely the overlapping area coefficient (AOC) and the extra area coefficient (EAC), to evaluate the automatic segmentations. For each labeled brain region, the results show an improvement of about 50%, with the DSC rising from roughly 0.50 for the V-Net-based networks to roughly 0.77 for the U-Net3D-based networks. The best performance was achieved with U-Net3D, with an average DSC of 0.76 for the frontal lobes, 0.75 for the parietal lobes, and 0.76 for the temporal lobes. U-Net3D is very promising: it is able to segment each region and each class of subjects without being influenced by the presence of hypometabolic regions.
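The DSC reported above is a standard volumetric overlap metric, DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the predicted and ground-truth voxel sets. A minimal NumPy sketch of how it can be computed on binary 3D masks follows; this is illustrative only and not the authors' evaluation code, and the AOC and EAC are omitted because their exact definitions are not given in the abstract.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |pred AND truth| / (|pred| + |truth|), ranging from 0
    (no overlap) to 1 (perfect overlap).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement by convention.
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4x4 volumes: each mask covers 32 voxels, overlapping in 16,
# so DSC = 2 * 16 / (32 + 32) = 0.5.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
pred[:2] = 1
truth = np.zeros((4, 4, 4), dtype=np.uint8)
truth[1:3] = 1
print(dice_coefficient(pred, truth))
```

In a multi-region setting such as the one described here, this per-mask DSC would simply be computed once per labeled region (e.g., each lobe) and averaged across test subjects.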
