Abstract

Based on the LOCNESS corpus, this paper uses WordSmith 6.0, SPSS 24, and other software to explore the use of temporal connectives in the Japanese writing of Chinese learners of Japanese. It proposes a tense classification method based on Japanese dependency structure: the output of Japanese dependency parsing is analyzed and combined with the tense characteristics of the target language to extract tense-related information and construct a maximum entropy tense classification model. The model identifies tense effectively, and its classification accuracy demonstrates the validity of the method. The paper also proposes a temporal feature extraction algorithm oriented toward the hierarchical phrase expression model. End-to-end speech recognition has become the development trend in large-scale continuous speech recognition because of its simplicity and efficiency. In this paper, end-to-end technology based on connectionist temporal classification (CTC) is applied to Japanese speech recognition. Taking into account the characteristics of hiragana, katakana, and kanji writing forms, different modeling choices are explored through experiments on a Japanese data set; the final system outperforms mainstream speech recognition systems based on hidden Markov models and bidirectional long short-term memory networks. The proposed algorithm can extract the temporal characteristics of rules that meet certain conditions while extracting expression rules. These tense characteristics guide rule selection during the expression process, make the expression results more consistent with linguistic knowledge, and constrain both the choice of relevant vocabulary and the structural ordering of the language. Through the analysis of time-series and static information, the time and space dimensions of the network structure are combined. Using CTC, an end-to-end speech recognition method for pronunciation error detection and diagnosis is established; it requires neither phoneme-level information nor forced alignment. With extended initials and finals as the error primitives, 64 error types are designed. The experimental results show that the method can effectively detect mispronunciations, with a detection accuracy of 87.07%, a false rejection rate of 7.83%, and a false acceptance rate of 25.97%. This method uses network information more comprehensively than traditional methods, and the model is more effective. Detailed experiments compare the prediction performance of this method and previous methods on the data set; the proposed method improves prediction accuracy by about 15% and achieves the intended goals of this work.
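As a rough illustration of the tense classification step described above, the sketch below trains a maximum entropy classifier (multinomial logistic regression) over features drawn from a dependency parse. The feature template, the toy parsed sentences, and the label set are illustrative placeholders, not the paper's actual features or data.

```python
# Minimal sketch of a maximum-entropy tense classifier over dependency-based
# features. The feature template and data below are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def tense_features(sentence_parse):
    """Map one parsed sentence to a feature dict.

    `sentence_parse` is assumed to be a list of (surface, pos, head_index)
    triples produced by some Japanese dependency parser; the exact feature
    set used in the paper is not reproduced here.
    """
    feats = {}
    for surface, pos, head in sentence_parse:
        if pos.startswith("AUX"):              # auxiliaries often mark tense
            feats["aux=" + surface] = 1
        if pos.startswith("VERB"):
            feats["verb_suffix=" + surface[-2:]] = 1
    feats["len"] = len(sentence_parse)
    return feats

# Toy training data: (parsed sentence, tense label)
train = [
    ([("食べ", "VERB", 1), ("た", "AUX", -1)], "past"),
    ([("食べ", "VERB", 1), ("ます", "AUX", -1)], "present"),
]

vec = DictVectorizer()
X = vec.fit_transform(tense_features(parse) for parse, _ in train)
y = [label for _, label in train]

# Multinomial logistic regression is the standard maximum-entropy model.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
print(clf.predict(vec.transform([tense_features(train[0][0])])))
```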

Highlights

  • Statistical language expression is a challenging frontier topic in natural language processing, with wide-ranging application value and important commercial prospects [1]

  • This paper proposes a method for integrating tense characteristics into statistical language expression

  • This paper studies end-to-end technology based on the self-attention mechanism and connectionist temporal classification (CTC) and builds a complete speech recognition system on a Japanese data set (a minimal CTC training sketch follows this list)
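The sketch below shows what a CTC-trained, character-level Japanese recognizer might look like: a small bidirectional LSTM encoder over acoustic features trained with `nn.CTCLoss`. It is not the paper's actual architecture; the feature dimension, the size of the kana/kanji output vocabulary, and all hyperparameters are placeholders.

```python
# Illustrative CTC training step for a character-level Japanese recognizer.
# Shapes, vocabulary size, and hyperparameters are placeholders.
import torch
import torch.nn as nn

class CTCRecognizer(nn.Module):
    def __init__(self, n_feats=80, hidden=256, n_units=3000):
        super().__init__()
        # Bidirectional LSTM encoder over frame-level acoustic features.
        self.encoder = nn.LSTM(n_feats, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        # Project to kana/kanji output units plus one CTC blank (index 0).
        self.proj = nn.Linear(2 * hidden, n_units + 1)

    def forward(self, feats):                 # feats: (batch, time, n_feats)
        enc, _ = self.encoder(feats)
        return self.proj(enc).log_softmax(dim=-1)

model = CTCRecognizer()
ctc_loss = nn.CTCLoss(blank=0)

feats = torch.randn(4, 200, 80)               # 4 utterances, 200 frames each
targets = torch.randint(1, 3001, (4, 20))     # toy label sequences
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)      # CTCLoss expects (time, batch, C)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```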


Summary

Introduction

Statistical language expression is a challenging frontier topic in natural language processing, with wide-ranging application value and important commercial prospects [1]. Existing language expression methods are still largely limited to rule-based treatments of tense. Just as human memory circulates invisibly within us, influencing behavior without ever surfacing completely, information circulates in the hidden state of a recurrent network. The method studied here uses a deep gating function to connect multi-layer LSTM units and introduces linear correlation between the upper and lower layers of the recurrent neural network, which makes it possible to build a deeper acoustic model. Comparative experiments verify the effectiveness of the deep LSTM neural network [14, 15] in speech recognition.
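To make the "linear correlation between upper and lower layers" concrete, the sketch below stacks LSTM layers and adds a linear projection of each layer's input into its output. This is one plausible reading of the description, not the paper's exact architecture; the layer sizes and the projection scheme are assumptions.

```python
# Sketch of a deep LSTM stack where each layer's output is combined with a
# linear projection of its input (one reading of "linear correlation between
# upper and lower layers"). Sizes are illustrative.
import torch
import torch.nn as nn

class DeepLinearLSTM(nn.Module):
    def __init__(self, n_feats=80, hidden=256, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList()
        self.skips = nn.ModuleList()
        in_dim = n_feats
        for _ in range(n_layers):
            self.layers.append(nn.LSTM(in_dim, hidden, batch_first=True))
            # Linear path from the layer's input to its output.
            self.skips.append(nn.Linear(in_dim, hidden))
            in_dim = hidden

    def forward(self, x):                     # x: (batch, time, n_feats)
        for lstm, skip in zip(self.layers, self.skips):
            out, _ = lstm(x)
            x = out + skip(x)                 # recurrent path + linear path
        return x

frames = torch.randn(2, 100, 80)
print(DeepLinearLSTM()(frames).shape)         # torch.Size([2, 100, 256])
```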
