Abstract

Background: The use of multi-modal features to improve the diagnostic accuracy of Parkinson's disease (PD) is still under investigation.

Method: Early diagnosis of PD is crucial for better management and treatment planning, as a delay in diagnosis may even lead to the death of the patient. Two deep-learning-based frameworks, feature-level and modal-level, are presented to classify the given subjects into PD and healthy using neuroimaging (T1-weighted MRI scans and SPECT) and biological (CSF) features as the dataset. In the feature-level framework, all of these features are integrated into a heterogeneous dataset, which is then fed to two deep learning models to diagnose PD. In the modal-level framework, the number of features from the T1-weighted MRI scans is first reduced using the ReliefF filter feature selection method; the reduced set of MRI features is then integrated with the SPECT and CSF features to form another heterogeneous dataset, which is fed to a deep learning model.

Results: Because of the imbalanced nature of the dataset (73 PD and 59 healthy subjects), the F1-score, geometric mean, sensitivity, and specificity are measured in addition to accuracy to evaluate the performance of the developed models. A maximum accuracy of 93.33% and 92.38% is observed for the CNN in the feature-level and modal-level frameworks, respectively.

Conclusions: Although the multi-modal approach is more complex than an approach that uses only one type of feature, i.e., either neuroimaging or biological, the results show that it is useful for classifying the given subjects into PD and healthy and can help clinicians diagnose PD accurately.
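
The modal-level pipeline described in the abstract (ReliefF-based reduction of the MRI features followed by concatenation with the SPECT and CSF features) can be illustrated with a minimal NumPy sketch. The ReliefF implementation below, the feature counts, the 100-feature cut-off, and the random placeholder data are all assumptions made for illustration; the abstract does not specify the paper's actual feature dimensions, hyper-parameters, or deep learning architecture.

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=10, n_samples=None, rng=None):
    """Minimal binary ReliefF for a continuous feature matrix X (features scaled to [0, 1]).

    Returns one relevance weight per feature; higher means more discriminative.
    This is a simplified sketch of the filter method named in the abstract, not the
    paper's exact implementation.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    idx = rng.choice(n, size=n_samples or n, replace=False)
    w = np.zeros(d)
    for i in idx:
        xi, yi = X[i], y[i]
        dist = np.abs(X - xi).sum(axis=1)      # Manhattan distance to every other sample
        dist[i] = np.inf                       # exclude the sample itself
        hits = np.flatnonzero(y == yi)
        misses = np.flatnonzero(y != yi)
        near_hits = hits[np.argsort(dist[hits])[:n_neighbors]]
        near_misses = misses[np.argsort(dist[misses])[:n_neighbors]]
        # Penalise features that differ among same-class neighbours,
        # reward features that differ among opposite-class neighbours.
        w -= np.abs(X[near_hits] - xi).mean(axis=0)
        w += np.abs(X[near_misses] - xi).mean(axis=0)
    return w / len(idx)

# Hypothetical modality shapes; the study's real feature counts are not given in the abstract.
rng = np.random.default_rng(0)
n_pd, n_hc = 73, 59
X_mri   = rng.random((n_pd + n_hc, 500))   # placeholder T1-weighted MRI features
X_spect = rng.random((n_pd + n_hc, 12))    # placeholder SPECT features
X_csf   = rng.random((n_pd + n_hc, 4))     # placeholder CSF biomarker features
y       = np.r_[np.ones(n_pd, int), np.zeros(n_hc, int)]   # 1 = PD, 0 = healthy

w = relieff_weights(X_mri, y, n_neighbors=10, rng=0)
top = np.argsort(w)[::-1][:100]            # keep the 100 highest-weighted MRI features (assumed cut-off)
X_fused = np.hstack([X_mri[:, top], X_spect, X_csf])   # modal-level heterogeneous dataset
```

The fused matrix `X_fused` would then be passed to the deep learning classifier; in the feature-level framework, the concatenation step is the same but without the ReliefF reduction of the MRI block.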
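
The evaluation metrics reported for the imbalanced dataset follow directly from the confusion matrix. A minimal sketch is given below, treating PD as the positive class (an assumption; the abstract does not state which class is positive), with hypothetical predictions as placeholders.

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """Accuracy, F1-score, geometric mean, sensitivity and specificity, with label 1 (PD) as positive."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)          # recall on the PD class
    specificity = tn / (tn + fp)          # recall on the healthy class
    precision   = tp / (tp + fp)
    return {
        "accuracy":    (tp + tn) / len(y_true),
        "f1":          2 * precision * sensitivity / (precision + sensitivity),
        "g_mean":      np.sqrt(sensitivity * specificity),
        "sensitivity": sensitivity,
        "specificity": specificity,
    }

# Example with hypothetical predictions (the paper's actual model outputs are not given in the abstract):
y_true = np.r_[np.ones(73, int), np.zeros(59, int)]
y_pred = y_true.copy()
y_pred[:5] = 0       # pretend the model misses 5 PD subjects
y_pred[-4:] = 1      # ...and mislabels 4 healthy subjects
print(imbalance_metrics(y_true, y_pred))
```

Reporting the geometric mean and F1-score alongside accuracy is standard practice for imbalanced data, since accuracy alone can look optimistic when one class (here 73 PD versus 59 healthy) dominates.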
