Abstract

This study develops a deep learning method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment. It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction (NR) algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population.
Method: Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, in which the testing set contained classification windows drawn only from trials never seen during training, and intra-trial, in which the testing set contained unseen classification windows drawn from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with hearing impairment, listening to competing talkers amidst background noise.
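
To make the two splitting strategies concrete, the sketch below (Python with NumPy; all function and variable names are illustrative assumptions, not taken from the paper) shows how 1-second classification windows tagged with their source trial could be assigned to training and testing sets under each strategy:

    import numpy as np

    def inter_trial_split(windows, trial_ids, test_trials):
        """Inter-trial: every window from a held-out trial goes to the test set."""
        test_mask = np.isin(trial_ids, test_trials)
        return windows[~test_mask], windows[test_mask]

    def intra_trial_split(windows, trial_ids, test_fraction=0.2, seed=0):
        """Intra-trial: each trial contributes windows to both sets."""
        rng = np.random.default_rng(seed)
        test_mask = np.zeros(len(windows), dtype=bool)
        for trial in np.unique(trial_ids):
            idx = np.flatnonzero(trial_ids == trial)
            n_test = int(round(test_fraction * len(idx)))
            test_mask[rng.choice(idx, size=n_test, replace=False)] = True
        return windows[~test_mask], windows[test_mask]

Because neighbouring windows within a trial are strongly correlated, the intra-trial split lets a model exploit trial-specific structure at test time, which is consistent with the inflated intra-trial scores reported below.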
Results: Using 1-second classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, under the inter-trial strategy. Under the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. The models' good performance on short 1-second EEG samples makes them suitable for real-world applications.
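
For reference, the two reported metrics can be computed over window-level predictions as in this minimal sketch (using scikit-learn; the labels and scores shown are illustrative, not real study data):

    import numpy as np
    from sklearn.metrics import accuracy_score, roc_auc_score

    # One binary label per 1-second window (e.g., NR OFF = 0, NR ON = 1)
    y_true = np.array([0, 1, 1, 0, 1, 0])
    # Model-predicted probability of the positive class for each window
    y_score = np.array([0.2, 0.8, 0.6, 0.4, 0.9, 0.1])

    y_pred = (y_score >= 0.5).astype(int)   # threshold at 0.5 for hard decisions
    acc = accuracy_score(y_true, y_pred)    # ACC: fraction of windows classified correctly
    auc = roc_auc_score(y_true, y_score)    # AUC: threshold-independent class separability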
Conclusion: Our DCNN models successfully addressed all three tasks using short 1-second EEG windows from participants with hearing impairment, demonstrating their practical potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the critical role of proper data splitting in EEG-based AAD tasks.
Significance: Our findings highlight the potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also motivating further exploration of alternative deep learning architectures and their possible constraints.
