Electroencephalogram (EEG) signals exhibit wave patterns that assist in identifying normal and abnormal brain activity. These patterns include alpha waves, which indicate relaxation; beta waves, which reflect normal brain rhythms and can be disturbed by cortical and other damage; delta waves, which are dominant in infants during sleep; and theta waves, which in adults can indicate irregular metabolic activity and hydrocephalus. Combinations of these waves can represent multiple brain conditions in both adolescents and adults. Researchers have proposed various models to analyze these signals, but most operate on single-domain or bi-domain features, which limits their classification performance. Most of these models are also static and do not incorporate continuous feedback or incremental learning, so their precision and recall either remain constant or degrade over successive evaluations. To overcome these limitations, this text proposes a novel EEG classification model that uses Q-Learning to classify multispectral feature sets. The model extracts Mel Frequency Cepstral Coefficients (MFCC) and iVector features from raw EEG data, which provides a multispectral representation of the signals. The extracted features are classified by a Q-Learning-based Recurrent Neural Network (RNN) classifier that combines Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) feature sets. Because of the MFCC extraction, the GRU and LSTM models are able to identify power spectral variations, cepstral variations, spectrogram patterns, Discrete Cosine Transform (DCT) variations, etc., while the iVector features allow entropy variations to be recognized and processed for better accuracy. Thus, both the LSTM and GRU models help augment the extracted features, which improves feature variance and therefore classification performance. Classification results are fed back into the training set via a correlation-based analysis layer, which continuously improves precision and recall across evaluations. With this layer, the model improves precision by 8.5%, recall by 8.3%, and accuracy by 4.9% compared with various state-of-the-art models. This performance was also observed to improve incrementally with the number of evaluations, which supports deploying the model in real-time applications.
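
To make the described pipeline concrete, the sketch below shows one possible way to compute MFCC features from a raw EEG channel and feed them to parallel GRU and LSTM branches whose outputs are merged before classification. This is an illustrative assumption, not the paper's implementation: the abstract does not specify libraries or hyperparameters, so librosa and tensorflow.keras are assumed, the sampling rate, layer widths, and class count are hypothetical, and the iVector extraction and Q-Learning feedback layer are omitted.

```python
# Minimal sketch only: libraries, sampling rate, and layer sizes are assumptions,
# and the iVector features and Q-Learning/correlation feedback loop are not shown.
import numpy as np
import librosa
from tensorflow.keras import layers, Model

SAMPLING_RATE = 256          # assumed EEG sampling rate (Hz)
N_MFCC = 20                  # number of cepstral coefficients per frame
N_CLASSES = 2                # e.g. normal vs. abnormal activity

def eeg_to_mfcc(channel: np.ndarray) -> np.ndarray:
    """Convert one raw EEG channel into a (frames, N_MFCC) MFCC matrix."""
    mfcc = librosa.feature.mfcc(y=channel.astype(np.float32),
                                sr=SAMPLING_RATE, n_mfcc=N_MFCC)
    return mfcc.T  # time-major, as expected by the recurrent layers

def build_classifier(timesteps: int) -> Model:
    """Parallel GRU and LSTM branches over MFCC frames, merged before softmax."""
    inp = layers.Input(shape=(timesteps, N_MFCC))
    gru_feats = layers.GRU(64)(inp)    # GRU-derived feature set
    lstm_feats = layers.LSTM(64)(inp)  # LSTM-derived feature set
    merged = layers.concatenate([gru_feats, lstm_feats])  # augmented features
    out = layers.Dense(N_CLASSES, activation="softmax")(merged)
    model = Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic data standing in for 10 seconds of one raw EEG channel.
    fake_eeg = np.random.randn(10 * SAMPLING_RATE)
    features = eeg_to_mfcc(fake_eeg)                     # (frames, N_MFCC)
    model = build_classifier(timesteps=features.shape[0])
    model.summary()
```

In this sketch, concatenating the GRU and LSTM outputs stands in for the feature augmentation described above; the correlation-based feedback layer would sit on top of this classifier and selectively re-add classified samples to the training set.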