Abstract

Fundamental frequency estimation is one of the most important problems in speech processing. An accurate estimate of the fundamental frequency plays a key role in speech and music analysis. Various methods have been proposed in the time and frequency domains; however, the main challenge remains strong noise in speech signals. In this paper, to improve the accuracy of fundamental frequency estimation in noisy signals, we propose a method for the optimal nonlinear combination of fundamental frequency estimation methods. To better discriminate voiced frames from unvoiced frames, the Voiced/Unvoiced (V/U) scores of four pitch detection methods, Autocorrelation (AC), Yin, YAAPT, and SWIPE, are combined by nonlinear fusion. After the Voiced/Unvoiced label of each frame is determined, the fundamental frequency (F0) of the frame is estimated using the SWIPE method. The optimal function for the nonlinear combination is learned by a Multi-Layer Perceptron (MLP) neural network (NN). To evaluate the proposed method, 10 speech files (5 female and 5 male voices) are selected from the standard PTDB-TUG database, and the results are reported in terms of the standard GPE, VDE, PTE, and FFE error criteria. The results indicate that, averaged over various SNRs, the proposed method yields relative reductions of 25.06%, 20.92%, 13.94%, and 25.94% in these criteria, respectively, demonstrating its effectiveness in comparison with state-of-the-art methods.
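To illustrate the fusion step, the following is a minimal Python sketch, assuming the per-frame V/U scores of the four detectors and the SWIPE pitch track have already been computed and time-aligned. The array layout, the scikit-learn MLP configuration, and the function names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of MLP-based V/U fusion (hypothetical interfaces;
# hyperparameters are placeholders, not the paper's configuration).
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_vu_fusion(scores_train, labels_train):
    """Learn a nonlinear combination of per-frame V/U scores with an MLP.

    scores_train : (n_frames, 4) array, one column per detector
                   (AC, Yin, YAAPT, SWIPE).
    labels_train : (n_frames,) array, 1 = voiced, 0 = unvoiced.
    """
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    mlp.fit(scores_train, labels_train)
    return mlp

def fused_f0(mlp, scores, swipe_f0):
    """Fused V/U decision per frame; F0 taken from SWIPE on voiced frames."""
    voiced = mlp.predict(scores) == 1
    return np.where(voiced, swipe_f0, 0.0)  # 0.0 marks unvoiced frames
```

In use, `scores` would hold the aligned per-frame scores of the four detectors and `swipe_f0` the per-frame SWIPE estimate; the MLP thus replaces a fixed voicing threshold with a learned nonlinear decision function.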
