Abstract

Brain-computer interface (BCI) technology attracts considerable scientific interest across a range of application systems. An increasingly relevant advance in BCI is communication through imagined speech. This paper therefore aims to develop a direct-speech BCI (DS-BCI) system using short-time features and an ANN classifier. Signal processing methods such as Mel-frequency cepstral coefficients (MFCC), linear predictive cepstral coefficients (LPCC) and the sequency-mapped real transform (SMRT) are applied on a short-time basis to extract base-level features. Statistical parameters are then computed from the ensemble average (EA) and time average (TA) to obtain two reduced vectors for each method. Hybrid feature vectors, namely MLC, SMC and SLC, are formed by fusing features from MFCC & LPCC, SMRT & MFCC and SMRT & LPCC, respectively, in both the EA and TA analyses. Principal component analysis (PCA) is performed on the hybrid feature vectors to derive uncorrelated components. The proposed method is evaluated on imagined EEG (EEG-i) and vocalized EEG (EEG-v) signals from the ‘Kara One’ database, and the classification accuracies of the individual and hybrid methods are reported. The results show that the hybrid features SMC and SLC improve classification accuracy compared with the individual features. PCA further improves accuracy and reduces feature dimension. TA-based SMC features with PCA give maximum accuracies of 77.37% and 62.52% for the EEG-i and EEG-v signals, respectively. The proposed method outperforms the state-of-the-art algorithms discussed in the paper.
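The sketch below illustrates the general hybrid-feature idea described in the abstract: short-time base features are reduced by ensemble averaging (EA) and time averaging (TA), two feature streams are fused by concatenation, and PCA decorrelates the fused vectors. It is a minimal illustration, not the authors' implementation; the frame length, the stand-in spectral feature extractor (used in place of the paper's MFCC/LPCC/SMRT), and the toy EEG data are assumptions, since the abstract gives no implementation details.

```python
# Minimal sketch of the short-time feature -> EA/TA statistics -> fusion -> PCA idea.
# All numerical choices here are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.decomposition import PCA

def short_time_frames(signal, frame_len=128, hop=64):
    """Split a 1-D EEG channel into overlapping short-time frames."""
    n = (len(signal) - frame_len) // hop + 1
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

def base_features(frames, n_coeff=12):
    """Placeholder per-frame features (the paper uses MFCC, LPCC and SMRT);
    here we simply keep the first n_coeff magnitude-spectrum coefficients."""
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    return spectrum[:, :n_coeff]              # shape: (n_frames, n_coeff)

def ea_ta_statistics(feat):
    """Ensemble average (across frames) and time average (across coefficients)."""
    ea = feat.mean(axis=0)                    # one value per coefficient
    ta = feat.mean(axis=1)                    # one value per frame
    return ea, ta

# Toy single-channel epoch; real epochs would come from the Kara One database.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(1000)

frames = short_time_frames(epoch)
feat_a = base_features(frames)                # stand-in for one method (e.g. SMRT)
feat_b = base_features(frames, n_coeff=10)    # stand-in for another (e.g. MFCC)

ea_a, _ = ea_ta_statistics(feat_a)
ea_b, _ = ea_ta_statistics(feat_b)

# Hybrid vector (analogous to "SMC" = SMRT + MFCC) by concatenation, then PCA
# over a batch of such vectors to obtain uncorrelated, lower-dimensional components.
hybrid = np.concatenate([ea_a, ea_b])
batch = np.stack([hybrid + 0.01 * rng.standard_normal(hybrid.shape) for _ in range(50)])
reduced = PCA(n_components=5).fit_transform(batch)
print(reduced.shape)                          # (50, 5)
```

In the paper, the reduced EA- and TA-based hybrid vectors would then be fed to the ANN classifier; the sketch stops at the PCA output.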

