Abstract

Hearing impairment is the world’s most common sensory impairment and impedes human communication and learning. One of the most effective responses to this problem is early and accurate hearing diagnosis using the electroencephalogram (EEG). The Auditory Evoked Potential (AEP) is a form of EEG signal generated by the cortex of the brain in response to an auditory stimulus. This study aims to develop an intelligent auditory-sensation system that analyzes and evaluates the functional reliability of hearing based on the AEP response. We create deep learning frameworks that enhance the training process of the deep neural network in order to achieve highly accurate diagnoses of hearing deficits. A publicly available AEP dataset is used, in which responses were recorded from five subjects as each subject heard an auditory stimulus in the left or right ear. First, the raw AEP data are transformed into time-frequency images through a wavelet transformation. A pre-trained network is then used to extract lower-level features, and the labeled time-frequency images are used to fine-tune the higher layers of the neural network architecture. On this AEP dataset we achieve 92.7% accuracy. The proposed deep CNN architecture provides better outcomes with fewer learnable parameters for hearing loss diagnosis.
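The first step of the pipeline, converting a raw AEP epoch into a time-frequency image via a wavelet transform, can be sketched as follows. This is a minimal illustration, not the authors' code: the Morlet wavelet, the sampling rate, the scale grid, and the synthetic epoch are all assumptions chosen for the example.

```python
import numpy as np

def morlet_scalogram(signal, scales, fs, w0=6.0):
    """Toy continuous wavelet transform with a complex Morlet wavelet.

    Returns a (len(scales), len(signal)) magnitude image; in the paper's
    pipeline such images would be saved and fed to the pre-trained CNN.
    """
    n = len(signal)
    t = np.arange(-n // 2, n // 2) / fs
    image = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Morlet wavelet dilated to scale s, normalized by sqrt(s)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2)
        wavelet /= np.sqrt(s)
        # Convolve the epoch with the wavelet and keep the magnitude
        coef = np.convolve(signal, np.conj(wavelet[::-1]), mode="same")
        image[i] = np.abs(coef)
    return image

# Hypothetical single-channel EEG epoch: 1 s at 256 Hz
fs = 256
t = np.arange(fs) / fs
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)

scales = np.geomspace(2, 64, 32)      # 32 frequency rows
tf_image = morlet_scalogram(epoch, scales, fs)
print(tf_image.shape)                  # (32, 256)
```

Each row of the resulting image corresponds to one wavelet scale (roughly one frequency band), so the stack of rows forms the 2-D input expected by an image-classification network.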
