Abstract

Earlier studies have applied microphone signal processing to target-speech evaluation and separation, relying on large amounts of training data and supervised machine learning. These approaches are well suited to stationary noise suppression, but they struggle with non-stationary noise and do not meet practical real-time processing requirements. To overcome these limitations, a system for joint speaker separation and noise suppression, referred to as the Optimized Binaural Enhancement via Attention Masking Network (OBEAMNET), is presented. This paper proposes a speech separation model for hearing aids, in which the noisy mixture is separated into two distinct speech signals by the OBEAMNET approach. First, input audio signals are collected from standard benchmark datasets. Cepstral features are then extracted from the input data to capture the essential information. A novel Fitness Ordered Black Widow Optimization (FO-BWO) algorithm is developed to select the most informative features from the audio signals, with the goals of increasing the PSNR and decreasing the RMSE. Finally, the selected features are fed to the OBEAMNET speech separation stage, improving the effectiveness of the proposed model, which is demonstrated by decoding auditory attention in noisy practical environments.
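The abstract does not give implementation details for OBEAMNET or FO-BWO, but the two textbook building blocks it mentions, cepstral feature extraction and the PSNR/RMSE criteria, can be illustrated generically. The sketch below computes the real cepstrum of an audio frame (inverse DFT of the log magnitude spectrum) and the RMSE/PSNR between a clean signal and an estimate; all function names are illustrative, not taken from the paper.

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform (O(N^2)); fine for a short illustrative frame.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT, returning only the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def real_cepstrum(frame):
    # Real cepstrum: inverse DFT of the log magnitude spectrum.
    log_mag = [math.log(abs(c) + 1e-12) for c in dft(frame)]
    return idft(log_mag)

def rmse(clean, estimate):
    # Root-mean-square error between a clean signal and its estimate.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clean, estimate)) / len(clean))

def psnr(clean, estimate, peak=1.0):
    # Peak signal-to-noise ratio in dB; `peak` is the assumed signal maximum.
    return 20.0 * math.log10(peak / max(rmse(clean, estimate), 1e-12))

# Toy example: a 64-sample tone as the "clean" frame and a slightly offset estimate.
clean = [math.sin(2 * math.pi * 5 * n / 64) for n in range(64)]
estimate = [s + 0.01 for s in clean]
ceps = real_cepstrum(clean)
print(rmse(clean, estimate))  # 0.01
print(psnr(clean, estimate))  # 40.0 dB
```

In the paper's pipeline, such cepstral coefficients would be the candidate features that FO-BWO filters, and PSNR/RMSE would serve as the fitness terms guiding that selection.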
