Abstract

Deep learning-based models have greatly advanced the performance of speech enhancement (SE) systems. However, two problems closely related to model generalizability under noisy conditions remain unsolved: (1) mismatched noisy conditions during testing, i.e., performance is generally sub-optimal when models are tested on unseen noise types that are not included in the training data; (2) local focus on specific noisy conditions, i.e., models trained on multiple noise types cannot optimally remove a specific noise type even when that noise type is included in the training data. These problems are common in real applications. In this article, we propose a novel denoising autoencoder with a multi-branched encoder (termed DAEME) model to deal with these two problems. The DAEME model involves two stages: training and testing. In the training stage, we build multiple component models to form a multi-branched encoder based on a dynamically-sized decision tree (DSDT). The DSDT is built from prior knowledge of speech and noisy conditions (speaker, environment, and signal factors are considered in this paper), where each component of the multi-branched encoder performs a particular mapping from noisy to clean speech along a branch of the DSDT. Finally, a decoder is trained on top of the multi-branched encoder. In the testing stage, noisy speech is first processed by each component model; the multiple outputs from these models are then integrated by the decoder to determine the final enhanced speech. Experimental results show that DAEME is superior to several baseline models in terms of objective evaluation metrics, automatic speech recognition results, and quality in subjective human listening tests.
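The testing-stage flow described above can be sketched in a few lines: each component model of the multi-branched encoder processes the same noisy input, and a decoder fuses the branch outputs into one enhanced estimate. The sketch below is a minimal, hypothetical stand-in (random linear maps instead of trained networks, fixed averaging weights instead of a learned decoder), assuming frame-level spectral features; it illustrates the data flow only, not the authors' trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_branches = 8, 4  # hypothetical feature dimension and branch count

# Component encoders: in DAEME each is a model specialized for one DSDT
# branch (e.g., a gender/SNR cluster). Here, near-identity random linear
# maps stand in for trained enhancement networks.
encoders = [np.eye(n_feat) + 0.1 * rng.standard_normal((n_feat, n_feat))
            for _ in range(n_branches)]

def encode(noisy_frame):
    """Run the noisy frame through every component model (multi-branched encoder)."""
    return np.stack([W @ noisy_frame for W in encoders])  # (n_branches, n_feat)

# Decoder: in DAEME this is a trained network (e.g., CNN) that integrates
# the branch outputs; here we use fixed uniform fusion weights as a stand-in.
fusion_weights = np.full(n_branches, 1.0 / n_branches)

def decode(branch_outputs):
    return fusion_weights @ branch_outputs  # (n_feat,)

noisy_frame = rng.standard_normal(n_feat)
enhanced = decode(encode(noisy_frame))
print(enhanced.shape)  # (8,)
```

A trained decoder would replace the fixed weights with a non-linear mapping learned jointly over all branch outputs, which is what allows it to favor the branch best matched to the current noisy condition.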

Highlights

  • Speech enhancement (SE) aims to improve the quality and intelligibility of distorted speech signals, which may be caused by background noises, interference, and recording devices

  • The results indicate that denoising autoencoder with multi-branched encoder (DAEME) has a better generalization ability to unseen noise types than other models compared in this paper

  • We analyzed and confirmed that a decoder using a convolutional neural network (CNN)-based non-linear transformation yielded better SE performance than decoders using a two-layer fully connected network, a linear transformation, or the BF approach

Summary

INTRODUCTION

Jonathan Sherman is with the Taiwan International Graduate Program (TIGP), Academia Sinica, Taipei, Taiwan. In real-world applications, an SE system is not guaranteed to deal only with seen noise types, which may limit the applicability of deep learning-based SE methods. We therefore design a new framework to increase the generalizability of deep learning SE models, i.e., to improve enhancement performance for both seen and unseen noise types. In building the DSDT, we regard speaker gender and signal-to-noise ratio (SNR) as utterance-level attributes, and the low- and high-frequency components as signal-level attributes. Based on these definitions, the training data set is partitioned into several clusters according to the desired degree of attributes.
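The attribute-based partitioning can be illustrated with a small sketch: each training utterance is assigned to a cluster keyed by its utterance-level attributes (gender and a high/low SNR split), and one component model would then be trained per cluster. The utterance records and the 5 dB SNR threshold below are hypothetical illustrations, not values from the paper.

```python
# Hypothetical utterance records with utterance-level attributes.
utterances = [
    {"id": "u1", "gender": "F", "snr_db": 12.0},
    {"id": "u2", "gender": "F", "snr_db": -3.0},
    {"id": "u3", "gender": "M", "snr_db": 7.0},
    {"id": "u4", "gender": "M", "snr_db": -1.0},
]

def branch_key(utt, snr_threshold_db=5.0):
    """Map an utterance to a DSDT-style cluster key (gender, SNR band)."""
    snr_band = "high" if utt["snr_db"] >= snr_threshold_db else "low"
    return (utt["gender"], snr_band)

# Partition the training set: one cluster per leaf of the attribute tree;
# each cluster would train one component model of the multi-branched encoder.
clusters = {}
for utt in utterances:
    clusters.setdefault(branch_key(utt), []).append(utt["id"])
```

Deeper trees simply refine the key with further attributes (e.g., the signal-level low/high-frequency split), trading finer specialization per branch against less training data per cluster.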

RELATED WORKS
SE using a linear regression function
SE using non-linear regression functions
Ensemble learning
Training stage
Testing stage
EXPERIMENTS & RESULTS
Evaluation metrics
Experiments on the WSJ dataset
Prior knowledge of speech and noise structures
Experiments on the TMHINT dataset
CONCLUSIONS