Abstract

Alzheimer’s disease (AD) is a global health issue that predominantly affects older people. It disrupts daily activities by altering neural networks in the brain. AD is characterized by the death of neurons, the formation of amyloid plaques, and the development of neurofibrillary tangles. In clinical settings, an early diagnosis of AD is critical to limit the problems associated with it and can be accomplished using neuroimaging modalities, such as magnetic resonance imaging (MRI) and positron emission tomography (PET). Deep learning (DL) techniques are widely used in computer vision and related disciplines for tasks such as classification, segmentation, and detection. The convolutional neural network (CNN) is a type of DL architecture commonly used in image-based applications to extract and classify features in the spatial and frequency domains. Batch normalization and dropout are commonly used components of modern CNN architectures. Owing to the shift in feature statistics (variance shift) that arises when dropout and batch normalization are combined, models can perform sub-optimally under diverse scenarios. This study examines the influence of this disharmony between batch normalization and dropout on the early diagnosis of AD. We considered three scenarios: (1) batch normalization with no dropout, (2) a single dropout layer in the network immediately before the softmax layer, and (3) a convolutional layer placed between a dropout layer and a batch normalization layer. We investigated three binary classification problems using the PET modality (mild cognitive impairment (MCI) vs. normal control (NC), AD vs. NC, and AD vs. MCI), one multiclass AD vs. NC vs. MCI classification problem using the PET modality, and one binary AD vs. NC classification problem using the MRI modality. Our findings suggest that using little or no dropout leads to better-performing designs than using a large dropout rate.
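Since the abstract describes the three dropout/batch-normalization placements only in words, the following PyTorch sketch illustrates how they could be expressed in code. It is a minimal illustration under assumed settings: the channel widths, kernel sizes, dropout rate, and two-class head are hypothetical and do not reproduce the authors' exact architecture.

```python
import torch.nn as nn

def make_block(in_ch, out_ch, scenario, p=0.5):
    """One convolutional block under the three dropout/BN scenarios."""
    if scenario == 3:
        # Scenario 3: a convolutional layer sits between the dropout layer
        # and the batch-normalization layer (dropout -> conv -> batch norm).
        return nn.Sequential(
            nn.Dropout2d(p),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    # Scenarios 1 and 2: batch normalization inside the blocks, no dropout here.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def build_cnn(scenario, num_classes=2, p=0.5):
    """Assemble a small CNN for one of the three scenarios (illustrative only)."""
    features = nn.Sequential(
        make_block(1, 16, scenario, p),
        nn.MaxPool2d(2),
        make_block(16, 32, scenario, p),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )
    if scenario == 2:
        # Scenario 2: a single dropout layer right before the classifier/softmax
        # (the softmax itself is applied by nn.CrossEntropyLoss during training).
        head = nn.Sequential(nn.Dropout(p), nn.Linear(32, num_classes))
    else:
        # Scenario 1: no dropout anywhere; scenario 3 already has dropout in the blocks.
        head = nn.Linear(32, num_classes)
    return nn.Sequential(features, head)

# Example: model = build_cnn(scenario=1, num_classes=2)
```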
