Abstract

A deep neural network-based approach is proposed to develop a better assessment model for English speech recognition and pronunciation quality evaluation. By constructing a deep nonlinear network structure, the model can approximate complex functions, form distributed representations of the input data, learn the essential characteristics of a data set from a small number of samples, and better simulate the analysis and learning processes of the human brain. The author applies deep learning technology to English speech recognition and builds a recognition model based on a deep belief network, using Mel-frequency cepstral features derived from human auditory characteristics. In the test, 210 samples were scored by both the machine and human raters, of which 30 differed by exactly one grade; the overall agreement rate between machine and human evaluation is 90.65%, the adjacent agreement rate is 100%, and the correlation coefficient is 0.798. For the evaluation of English speech and pronunciation quality, this indicates a strong correlation between machine scores and human scores.
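The abstract names the two main ingredients of the model: Mel-frequency cepstral features as the acoustic input and a deep belief network as the scoring model. The sketch below illustrates that kind of pipeline with off-the-shelf libraries; it is an assumption for illustration, not the authors' implementation. The MFCC extraction uses librosa, the deep belief network is approximated by stacked restricted Boltzmann machines from scikit-learn with a logistic-regression output layer, and the training data, grade labels, and hyperparameters are placeholders.

```python
# Minimal sketch: MFCC features feeding a stacked-RBM ("deep belief network"-style)
# classifier that predicts a pronunciation grade. All data and settings are illustrative.
import numpy as np
import librosa
from sklearn.pipeline import Pipeline
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression


def mfcc_features(path, n_mfcc=13):
    """Return a fixed-length vector: per-coefficient mean and std of the MFCCs."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Synthetic stand-in data so the sketch runs end to end; in practice each row
# would be mfcc_features("utterance.wav") rescaled to [0, 1], and each label
# a human-assigned pronunciation grade.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 26))   # 26 = 13 MFCC means + 13 MFCC stds
y = rng.integers(0, 5, size=200)            # grades 0..4

# Stacked RBMs learn a distributed representation layer by layer;
# the logistic-regression head maps that representation to a grade.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=15, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=15, random_state=0)),
    ("grade", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("predicted grades:", model.predict(X[:5]))
```

In a full implementation the RBM stack would be pre-trained layer by layer and then fine-tuned with backpropagation; the scikit-learn pipeline above only captures the overall structure of features, stacked hidden layers, and a supervised scoring head.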
