Abstract

Hearing impairment has become the most widespread sensory disorder in the world, obstructing human-to-human communication and comprehension. EEG-based brain-computer interface (BCI) technology may offer an important route to rehabilitating the hearing capacity of people who cannot sustain verbal contact or behavioral responses to sound stimulation. Auditory evoked potentials (AEPs) are EEG signals recorded at the scalp in response to an acoustic stimulus. This study aims to develop an intelligent hearing-level assessment technique using AEP signals to address these concerns. First, we convert the raw AEP signals into time–frequency images using the continuous wavelet transform (CWT). Then, a support vector machine (SVM) is used to classify the time–frequency images. A reputable, publicly available dataset is used to validate the proposed approach. The approach achieves a maximum classification accuracy of 95.21%, which clearly indicates that it provides very encouraging performance in detecting AEP responses for determining human auditory level.

Keywords: Electroencephalogram (EEG), Brain-computer interface (BCI), Auditory evoked potential (AEP), Machine learning
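The CWT-then-SVM pipeline described above can be sketched as follows. This is a minimal illustration on synthetic two-class signals, not the authors' implementation: the sampling rate, wavelet choice ("morl"), scale range, and the use of flattened scalogram magnitudes as SVM features are all assumptions for demonstration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
fs = 256                      # assumed sampling rate (Hz)
n_trials, n_samples = 120, fs # 1-second synthetic epochs

# Synthetic two-class data: class 1 carries an added 10 Hz component
t = np.arange(n_samples) / fs
X_raw = rng.normal(0.0, 1.0, (n_trials, n_samples))
y = rng.integers(0, 2, n_trials)
X_raw[y == 1] += np.sin(2 * np.pi * 10 * t)

# Step 1: CWT each epoch into a time-frequency image (Morlet wavelet)
scales = np.arange(1, 33)

def to_tf_image(sig):
    coeffs, _ = pywt.cwt(sig, scales, "morl")
    return np.abs(coeffs)  # magnitude scalogram, shape (32, n_samples)

X_img = np.stack([to_tf_image(s) for s in X_raw])

# Step 2: flatten the images and classify with an SVM
X_feat = X_img.reshape(n_trials, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In practice, the time-frequency images would be computed from real AEP epochs, and scalogram images are often fed to the classifier after resizing or feature extraction rather than raw flattening; the sketch only shows the overall flow.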
