Abstract

Mixture of experts (ME) is a modular neural network architecture for supervised learning. This paper illustrates the use of the ME network structure to guide model selection for the classification of electroencephalogram (EEG) signals. The expectation-maximization (EM) algorithm was used to train the ME, so that the learning process is decoupled in a manner that fits well with the modular structure. The EEG signals were decomposed into time–frequency representations using the discrete wavelet transform, and statistical features were calculated to describe their distribution. The ME network structure was implemented for classification of the EEG signals using these statistical features as inputs. To improve classification accuracy, the outputs of the expert networks were combined by a gating network, trained simultaneously, so as to stochastically select the expert performing best on the problem at hand. Three types of EEG signals (signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified with an accuracy of 93.17% by the ME network structure, which achieved accuracy rates higher than those of the stand-alone neural network models.
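The gating mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions (8 statistical features, 3 experts, 3 EEG classes) are assumptions for the example, the expert and gating parameters are randomly initialized rather than fitted with the EM algorithm, and the experts are simple linear-softmax models. It only shows how a softmax gating network combines expert outputs into one class-probability vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d statistical features computed from the DWT
# sub-bands, k experts, c EEG classes (healthy, seizure-free, seizure).
d, k, c = 8, 3, 3

# Randomly initialized parameters for illustration only; in the paper
# these would be estimated with the EM algorithm.
W_experts = rng.normal(size=(k, c, d))  # one linear expert per mixture component
W_gate = rng.normal(size=(k, d))        # gating network weights

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def me_forward(x):
    """Mixture-of-experts output for one feature vector x of shape (d,)."""
    expert_probs = softmax(W_experts @ x, axis=-1)  # (k, c): each expert's class posterior
    gate = softmax(W_gate @ x)                      # (k,): gating weights, sum to 1
    return gate @ expert_probs                      # (c,): gated mixture of expert outputs

x = rng.normal(size=d)   # stand-in for a vector of DWT statistical features
p = me_forward(x)
print(p)                 # a valid probability distribution over the c classes
```

Because the gating weights form a convex combination of the experts' class posteriors, the mixture output is itself a probability distribution, and training the gate jointly with the experts lets each expert specialize on the region of feature space where it performs best.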
