Abstract

Speech signals are often corrupted by environmental noise such as airport, station, and street noise. Such noise degrades speech quality and harms downstream applications, particularly voice communication, automatic speech recognition, and speaker identification; automatic speech enhancement is therefore necessary. This research work introduces a novel deep-learning-based speech signal enhancement model. The proposed model comprises three major phases: (a) pre-processing, (b) feature extraction, and (c) speech enhancement. In the pre-processing phase, the input speech signal is decomposed into a series of overlapping frames using a Hanning window. From each frame, multiple features are then extracted: improved Mel-frequency cepstral coefficients (IMFCCs), fractional delta AMS, and modified short-time Fourier transform (M-STFT) features. In the speech enhancement phase, the noise present in the signal is first estimated and removed. The denoised frames are used to determine an optimal mask for every frame of the noisy speech signal, and this mask is employed to train a Deep Convolutional Neural Network (DCNN). The signal reconstructed from the DCNN output is the enhanced speech signal. Finally, the proposed work (multi-features + DCNN-based speech enhancement) is validated against existing models on several evaluation measures, demonstrating its superior performance.
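
As a rough illustration of the pre-processing and masking steps described above, the sketch below decomposes a signal into overlapping Hanning-windowed frames, computes a plain STFT magnitude per frame, and forms an ideal-ratio-mask-style training target. The frame length, hop size, and IRM formulation are assumptions chosen for illustration; the paper's IMFCC, fractional delta AMS, and M-STFT variants and its specific "optimal mask" are not defined in the abstract.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D speech signal into overlapping Hanning-windowed frames.

    frame_len/hop correspond to an assumed 25 ms / 10 ms at 16 kHz;
    these values are illustrative, not taken from the paper.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([
        x[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])

def stft_magnitude(frames, n_fft=512):
    """Per-frame magnitude spectrum (plain STFT; the paper's modified
    STFT is not specified in the abstract)."""
    return np.abs(np.fft.rfft(frames, n=n_fft, axis=1))

def ratio_mask(clean_mag, noise_mag, eps=1e-8):
    """Ideal-ratio-mask-style target, an assumed stand-in for the
    paper's 'optimal mask' used to train the DCNN."""
    return clean_mag / (clean_mag + noise_mag + eps)

# Example: 1 s of synthetic "clean" and "noise" signals at 16 kHz.
clean = np.random.randn(16000)
noise = 0.3 * np.random.randn(16000)

clean_mag = stft_magnitude(frame_signal(clean))
noise_mag = stft_magnitude(frame_signal(noise))
mask = ratio_mask(clean_mag, noise_mag)
print(mask.shape)  # (frames, 257): one mask value per frame and frequency bin
```

In mask-based enhancement of this kind, the network is trained to predict such a per-frame, per-frequency mask from noisy-signal features, and the enhanced signal is reconstructed by applying the predicted mask to the noisy spectrum.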
