Abstract

Deep convolutional neural networks and their ensemble variants for classifying P300 in the Devanagari script (DS)-based P300 speller (DS-P3S) generate a large number of trainable parameters, which increases computational complexity and the risk of overfitting. Recent attempts to overcome these problems further degrade accuracy due to dense connectivity and channel-mix group convolution; moreover, compressing the deep models in these attempts was also found to lose vital information. Therefore, to mitigate these problems, an efficient compact classification model called “DS-P3SNet,” combined with knowledge distillation (KD) and transfer learning (TL), is proposed in this article. It includes: 1) extraction of rich morphological information across the temporal region; 2) a combination of channelwise and channel-mix depthwise convolution (C2-DwCN) for efficient channel selection and extraction of spatial information with fewer trainable parameters; 3) channelwise convolution (Cw-CN) for classification to provide sparse connectivity; 4) knowledge distillation to reduce the tradeoff between accuracy and the number of trainable parameters; 5) subject-to-subject transfer learning to reduce subject variability; and 6) trial-to-trial transfer learning to reduce the tradeoff between the number of trials and accuracy. The experiments were performed on a self-generated dataset of 20 words comprising 79 DS characters, collected from ten healthy volunteer subjects.
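The knowledge distillation step named above follows the standard teacher-student formulation: a compact student network is trained against both the hard labels and the teacher's temperature-softened output distribution. A minimal NumPy sketch of such a loss is shown below; the temperature `T`, weight `alpha`, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic Hinton-style KD loss (illustrative, not the paper's exact loss):
    a weighted sum of soft-target cross-entropy against the teacher and
    hard-label cross-entropy against the ground truth."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Soft term, scaled by T^2 to keep gradient magnitudes comparable.
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean() * (T ** 2)
    # Hard term: ordinary cross-entropy with the true class labels.
    q = softmax(student_logits)
    hard = -np.log(q[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

In practice the teacher would be a larger pretrained P300 classifier and the student the compact DS-P3SNet; only the student's parameters are updated.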
Average accuracies of <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$95.32~{\pm }~0.85$ </tex-math></inline-formula>% and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$94.64~{\pm }~0.68$ </tex-math></inline-formula>% were obtained for the subject-dependent and subject-independent experiments, respectively. The number of trainable parameters was also reduced by approximately 2–34 times compared with existing models, with improved or equivalent performance.
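The parameter savings of depthwise-style convolutions over standard (channel-mix) convolutions can be seen with a simple count. The sketch below compares a standard 1-D convolution, where every output channel mixes all input channels, against a depthwise-separable alternative; the channel and kernel sizes are illustrative assumptions, not the paper's actual layer configuration.

```python
def standard_conv_params(c_in, c_out, k):
    # Standard convolution: each of c_out filters spans all c_in channels
    # with a kernel of length k (biases ignored for simplicity).
    return c_in * c_out * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one length-k filter per input channel (c_in * k),
    # followed by a 1x1 pointwise stage that mixes channels (c_in * c_out).
    return c_in * k + c_in * c_out

# Illustrative sizes for an EEG temporal filter bank (assumed, not from the paper).
c_in, c_out, k = 64, 64, 13
print(standard_conv_params(c_in, c_out, k))        # 53248
print(depthwise_separable_params(c_in, c_out, k))  # 4928
```

For these sizes the separable form uses roughly 10x fewer weights, which is the kind of reduction that motivates the channelwise and depthwise blocks in compact EEG classifiers.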
