Abstract

This study aimed to construct a novel multimodal model that fuses electroencephalogram (EEG) data recorded under neutral, negative, and positive audio stimulation to discriminate between depressed patients and normal controls. The EEG data of the different modalities were fused with a feature-level fusion technique to build a depression recognition model. EEG signals were recorded from 86 depressed patients and 92 normal controls while they received the different audio stimuli. Linear and nonlinear features were then extracted from the EEG signals of each modality and selected to form the feature set of that modality. A linear combination technique was used to fuse the EEG features of the different modalities into a global feature vector and to identify several discriminative features. Genetic algorithms were then applied to weight the features and improve the overall performance of the recognition framework. The classification accuracies of the k-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM) classifiers were compared, and the results were encouraging. The highest classification accuracy, 86.98%, was obtained by the KNN classifier on the fusion of the positive and negative audio-stimulation modalities, demonstrating that the fused modalities can achieve higher depression recognition accuracy than the individual modality schemes. This study may provide an additional tool for identifying patients with depression.
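To make the described pipeline concrete, the following is a minimal Python sketch of feature-level fusion combined with genetic-algorithm feature weighting and the three-classifier comparison. It is not the authors' implementation: the synthetic feature matrices, GA settings, fold counts, and the use of scikit-learn are all illustrative assumptions standing in for the selected linear/nonlinear EEG features of the positive and negative stimulation modalities.

```python
# Hypothetical sketch of the fusion-and-weighting pipeline from the abstract.
# All data, names, and GA settings are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder per-modality feature matrices (subjects x features) and labels;
# in the study these would hold the selected linear/nonlinear EEG features
# extracted under positive and negative audio stimulation.
n_subjects = 178                       # 86 patients + 92 controls
X_pos = rng.standard_normal((n_subjects, 10))
X_neg = rng.standard_normal((n_subjects, 10))
y = np.array([1] * 86 + [0] * 92)      # 1 = depressed, 0 = control

# Feature-level fusion: combine modality features into one global
# feature vector per subject (here, simple concatenation).
X_fused = np.hstack([X_pos, X_neg])

def fitness(weights, X, y):
    """Cross-validated KNN accuracy on the weighted feature vector."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X * weights, y, cv=5).mean()

# A very small genetic algorithm that evolves per-feature weights.
pop_size, n_gen, n_feat = 20, 30, X_fused.shape[1]
pop = rng.random((pop_size, n_feat))
for _ in range(n_gen):
    scores = np.array([fitness(w, X_fused, y) for w in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
    mates = parents[rng.permutation(len(parents))]
    children = (parents + mates) / 2                          # crossover
    children += rng.normal(0, 0.1, children.shape)            # mutation
    pop = np.vstack([parents, np.clip(children, 0, None)])

best = pop[np.argmax([fitness(w, X_fused, y) for w in pop])]

# Compare the three classifiers named in the abstract on the weighted features.
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("DT", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC())]:
    acc = cross_val_score(clf, X_fused * best, y, cv=5).mean()
    print(f"{name}: {acc:.4f}")
```

In this sketch the GA fitness is the cross-validated accuracy of a single classifier (KNN); other fitness choices, population sizes, or crossover schemes would be equally plausible given only the abstract.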
