Abstract
Automated detection of dementia stage using multimodal imaging can help improve clinical diagnosis. In this study, we develop an Inception-ResNet wrapper model to differentiate healthy controls (HC), mild cognitive impairment (MCI), and Alzheimer's disease (AD) using conjoint magnetic resonance imaging (MRI) and positron emission tomography (PET) scans. We use T1-weighted MR and PET images of individuals aged between 42 and 95 years, comprising HC, MCI and AD subjects. We first perform 3D tissue segmentation of the MR images after skull stripping, and the atlas-based segmented MR tissue is fused with the corresponding PET image. We transform the PET images from RGB to HSI color space, fuse the MR and PET images using the two-dimensional Fourier transform and the discrete wavelet transform (DWT), and reconstruct the fused MR-PET image with the corresponding inverse transforms. After fusion of the MRI and PET modalities, we split the data into 60 % for training, 20 % for validation and the remaining 20 % for testing, and evaluate several convolutional neural networks. The proposed model was the best classifier compared to existing methods, with accuracies of 95.5 %, 94.1 % and 95.9 % in classifying HC vs MCI, MCI vs AD and AD vs HC, respectively. We conclude that the proposed deep learning model performs well and has potential for automated classification of healthy controls and dementia stages using combined MRI and PET modalities.
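As an illustration of the wavelet-domain fusion step described above, the following is a minimal Python sketch using NumPy and PyWavelets. It fuses a co-registered MR slice with the intensity channel of a PET slice via a single-level 2D DWT and reconstructs the fused image with the inverse DWT. The fusion rule shown (averaging approximation coefficients, taking the larger-magnitude detail coefficients) is a common convention and only an assumption here; the abstract does not specify the exact rule used in the paper.

```python
import numpy as np
import pywt


def fuse_mri_pet_dwt(mri_slice: np.ndarray,
                     pet_intensity: np.ndarray,
                     wavelet: str = "haar") -> np.ndarray:
    """Fuse a co-registered MR slice with the PET intensity channel in the
    2D DWT domain, then reconstruct with the inverse 2D DWT.

    Note: the average/max-abs fusion rule below is an assumption for
    illustration, not necessarily the rule used in the original study.
    """
    # Single-level 2D DWT of each modality
    mri_cA, (mri_cH, mri_cV, mri_cD) = pywt.dwt2(mri_slice, wavelet)
    pet_cA, (pet_cH, pet_cV, pet_cD) = pywt.dwt2(pet_intensity, wavelet)

    # Approximation (low-frequency) coefficients: simple average
    fused_cA = 0.5 * (mri_cA + pet_cA)

    # Detail (high-frequency) coefficients: keep the larger-magnitude response
    def max_abs(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused_details = (max_abs(mri_cH, pet_cH),
                     max_abs(mri_cV, pet_cV),
                     max_abs(mri_cD, pet_cD))

    # Inverse 2D DWT yields the fused MR-PET image
    return pywt.idwt2((fused_cA, fused_details), wavelet)
```

In practice the two inputs would be intensity-normalized, spatially registered 2D slices of the same size; the same idea extends to the Fourier-domain fusion mentioned in the abstract by replacing the DWT/inverse-DWT pair with a 2D FFT and inverse FFT.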