Abstract

With advances in speech signal processing, and because speech signals are universal, easy to collect, and unique to each person, many researchers have been drawn to the field of speech verification. Most current speech verification systems rely on large training data sets to achieve good results, and no good verification scheme exists when training data are inadequate. This paper proposes a novel multilevel architecture for speech verification that extracts feature parameters from mobile phone voice through a multilevel wavelet transform. Experiments show that the multilevel wavelet authentication architecture improves speech verification performance; the recognition rate of the mobile phone system is more robust than, and superior to, other methods.
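To make the multilevel wavelet feature extraction concrete, here is a minimal sketch using the Haar wavelet. This is an illustration only: the paper does not specify which wavelet it uses, and the per-band energy feature chosen here is an assumption, not the paper's actual feature set.

```python
def haar_level(signal):
    """One level of the Haar wavelet transform: the approximation is
    the pairwise averages, the detail is the pairwise differences."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multilevel_features(signal, levels=3):
    """Decompose the signal `levels` times, collecting one energy value
    per detail band plus the energy of the final approximation."""
    features = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_level(approx)
        features.append(sum(d * d for d in detail))  # detail-band energy
    features.append(sum(a * a for a in approx))      # residual approximation energy
    return features

# A constant signal has no detail energy at any level:
print(multilevel_features([1.0] * 8))  # [0.0, 0.0, 0.0, 1.0]
```

Each decomposition level halves the time resolution while isolating a finer frequency band, which is what lets a multilevel scheme combine features "from each layer" as the paper describes.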

Highlights

  • In recent years, more and more researchers have become interested in applying biometric technology to identification and verification

  • This paper focuses on the linear predictive cepstral coefficient (LPCC), MFCC, an algorithm based on the wavelet transform, and the similarity-matching algorithm of dynamic time warping (DTW)

  • We proposed a novel multilevel architecture for speech verification on mobile devices
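The DTW similarity matching named in the highlights can be sketched as the classic dynamic program below. This is a generic textbook formulation of DTW, not the paper's specific implementation; frame-level speech features would replace the scalar sequences used here.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i][j] holds the minimal accumulated distance aligning the
    first i elements of `a` with the first j elements of `b`.
    Runs in O(len(a) * len(b)) time and space.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A time-stretched copy of a sequence still aligns at zero cost,
# which is why DTW suits utterances spoken at different speeds:
print(dtw_distance([1.0, 2.0, 3.0], [1.0, 1.0, 2.0, 2.0, 3.0]))  # 0.0
```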


Summary

INTRODUCTION

More and more researchers have become interested in applying biometric technology to identification and verification. The contributions of this paper are: (1) a thorough study of the related work, and a new architecture and application for speech verification on mobile terminals; (3) experiments on a self-built speech database demonstrating the superiority of the short-time-energy-based algorithm for extracting the effective speech components: it extracts the effective voice segments from the speech signal, removes invalid noise segments, and reduces the error rate, fully confirming the algorithm's reliability and excellent performance; (4) a multilevel speech verification architecture that effectively combines the speech features from each layer of the wavelet feature extraction algorithm, improving the overall recognition rate on small speech data sets.
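The short-time-energy extraction of effective voice segments described above can be sketched as follows. The frame length, hop size, and the fixed-ratio threshold are illustrative assumptions; the paper's actual thresholding rule is not specified here.

```python
import math

def short_time_energy(signal, frame_len=160, hop=80):
    """Per-frame energy (sum of squared samples) over sliding frames."""
    energies = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(sum(s * s for s in frame))
    return energies

def active_frames(signal, frame_len=160, hop=80, ratio=0.1):
    """Indices of frames whose energy exceeds ratio * max energy.

    A simple fixed-ratio threshold stands in for whatever decision
    rule the paper uses; frames below it are treated as noise/silence
    and discarded, keeping only the effective voice segments.
    """
    energies = short_time_energy(signal, frame_len, hop)
    if not energies:
        return []
    threshold = ratio * max(energies)
    return [i for i, e in enumerate(energies) if e > threshold]

# 320 samples of silence followed by 320 samples of a 440 Hz tone
# (8 kHz sampling rate assumed): only the tone frames survive.
signal = [0.0] * 320 + [math.sin(2 * math.pi * 440 * t / 8000) for t in range(320)]
print(active_frames(signal))
```

Dropping the low-energy frames before feature extraction is what removes the "invalid noise segment" and reduces the error rate in the scheme described above.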

RELATED WORK
BACKGROUND
FEATURE PARAMETERS BASED ON WAVELET DECOMPOSITION
COMPARED ALGORITHMS AND EXPERIMENTAL RESULTS
SPEECH DATABASE
Findings
CONCLUSION
