Abstract

Two convolutional neural network (CNN) models are introduced to accurately classify event-related potentials (ERPs) by fusing frequency, time, and spatial domain information acquired from the continuous wavelet transform (CWT) of the ERPs recorded from multiple spatially distributed channels. The multidomain models fuse the multichannel Z-scalograms and V-scalograms, which are generated from the standard CWT scalogram by zeroing out and by discarding, respectively, the inaccurate artifact coefficients that lie outside the cone of influence (COI). In the first multidomain model, the input to the CNN is generated by fusing the Z-scalograms of the multichannel ERPs into a frequency-time-spatial cuboid. The input to the CNN in the second multidomain model is formed by fusing the frequency-time vectors of the V-scalograms of the multichannel ERPs into a frequency-time-spatial matrix. Experiments are designed to demonstrate (a) customized ERP classification, in which the multidomain models are trained and tested on the ERPs of individual subjects for brain-computer interface (BCI)-type applications, and (b) group-based ERP classification, in which the models are trained on the ERPs of a group of subjects and tested on single subjects not included in the training set, for applications such as brain disorder classification. Results show that both multidomain models yield high classification accuracies for single trials and small-average ERPs using only a small subset of top-ranked channels, and that the multidomain fusion models consistently outperform the best unichannel classifiers.
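The abstract does not spell out the implementation, but the two input constructions can be illustrated with a minimal sketch. Assuming a Morlet wavelet computed with PyWavelets and the common Torrence-Compo e-folding approximation for the COI boundary (a coefficient at scale s is valid only at least about √2·s samples from each record edge), one reading of the Z-scalogram, the V-scalogram vector, and the two multichannel fusions might look as follows; the sampling rate, scale range, channel count, and synthetic signals are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pywt

# Illustrative parameters (assumptions, not from the paper).
fs = 256                                   # sampling rate (Hz)
n_samples, n_channels = 256, 8
rng = np.random.default_rng(0)
t = np.arange(n_samples) / fs
# Synthetic multichannel "ERPs": a 10 Hz component plus noise per channel.
erps = [np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n_samples)
        for _ in range(n_channels)]

scales = np.arange(1, 65)                  # assumed CWT scale range

# COI mask (Torrence-Compo-style approximation): keep a coefficient at scale s
# only if it lies at least ~sqrt(2)*s samples from the nearest record edge.
edge_dist = np.minimum(np.arange(n_samples),
                       n_samples - 1 - np.arange(n_samples))
coi_mask = edge_dist[None, :] >= np.sqrt(2) * scales[:, None]

z_scalograms, v_vectors = [], []
for x in erps:
    coeffs, _ = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
    mag = np.abs(coeffs)                                # standard CWT scalogram
    z_scalograms.append(np.where(coi_mask, mag, 0.0))   # Z-scalogram: zero outside COI
    v_vectors.append(mag[coi_mask])                     # V-scalogram: discard outside COI

cuboid = np.stack(z_scalograms, axis=-1)   # frequency x time x channel cuboid (model 1)
v_matrix = np.stack(v_vectors, axis=0)     # channel x (frequency-time) matrix (model 2)
print(cuboid.shape, v_matrix.shape)        # e.g. (64, 256, 8) and (8, K)
```

Because the COI mask depends only on the record length and scales, each channel's V-scalogram vector has the same length K, so stacking the vectors row-wise yields the frequency-time-spatial matrix described for the second model.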
