Abstract

In this paper, an exact subspace method for fundamental frequency estimation is presented. The method is based on the principles of the MUSIC algorithm, wherein the orthogonality between the signal and noise subspaces is exploited. Unlike the original MUSIC algorithm, the new method uses an exact measure of the angles between the subspaces. This makes a difference, for example, when the fundamental frequency is low, when the signal is real-valued, or when the number of samples is small. In Monte Carlo simulations, the performance of the new method is compared to a number of state-of-the-art methods and is demonstrated to yield improvements in certain critical cases. Moreover, the method is applied to a speech signal and shown to be robust to noise.
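To make the subspace idea concrete, the sketch below implements a generic harmonic MUSIC-style fundamental frequency estimator: the sample covariance matrix is eigendecomposed, the noise subspace is formed from the minor eigenvectors, and each candidate fundamental is scored by how orthogonal its harmonic steering vectors are to that subspace. This is a standard harmonic-MUSIC illustration under assumed parameters (snapshot length `M`, harmonic order `L`, grid `f0_grid`), not the paper's exact angle-based measure.

```python
import numpy as np

def harmonic_music_f0(x, fs, L, f0_grid, M=None):
    """Estimate the fundamental frequency of a real harmonic signal.

    Generic harmonic-MUSIC sketch (not the paper's exact method):
    for each candidate f0, sum the squared projections of the L harmonic
    steering vectors onto the noise subspace; the minimizer is the estimate.
    """
    N = len(x)
    if M is None:
        M = N // 2
    # Sample covariance from overlapping length-M snapshots.
    snaps = np.array([x[i:i + M] for i in range(N - M + 1)])
    R = snaps.T @ snaps / snaps.shape[0]
    # Eigendecomposition; np.linalg.eigh returns eigenvalues in ascending
    # order, so the first M - 2L columns span the noise subspace
    # (a real sinusoid contributes two complex exponentials, hence 2L).
    _, V = np.linalg.eigh(R)
    G = V[:, :M - 2 * L]
    n = np.arange(M)
    costs = []
    for f0 in f0_grid:
        # Steering vectors for harmonics f0, 2*f0, ..., L*f0 (columns).
        A = np.exp(2j * np.pi * np.outer(n, f0 * np.arange(1, L + 1)) / fs)
        # Small ||G^H A|| means the harmonics lie near the signal subspace.
        costs.append(np.linalg.norm(G.conj().T @ A, "fro") ** 2)
    return f0_grid[int(np.argmin(costs))]
```

For example, for a signal with three harmonics of a 200 Hz fundamental sampled at 8 kHz, evaluating the cost over a 1 Hz grid from 100 to 400 Hz recovers the fundamental; coarser grids trade resolution for speed.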
