Abstract

In this paper, we conduct in-depth research and analysis on the intelligent recognition and teaching of English fuzzy text through parallel projection and region expansion. We construct the Multisense Soft Cluster Vector (MSCVec), a multisense word vector model based on non-negative matrix factorization and sparse soft clustering. MSCVec is a monolingual word vector model: it applies non-negative matrix factorization to the positive pointwise mutual information (PPMI) between words and contexts to extract low-rank representations of the mixed semantics of polysemous words, then uses a sparse soft clustering algorithm to partition the individual senses of each polysemous word and to obtain its global sense-membership distribution. The specific sense cluster of a polysemous word is determined from the negative mean log-likelihood of the global membership between the contextual semantics and the polysemous word, and the polysemous word vectors are finally learned with the FastText model over the extended dictionary word set. The advantages of the MSCVec model are that it is an unsupervised learning process requiring no knowledge base, that its substring representation guarantees vectors for unregistered (out-of-vocabulary) words, and that its global membership distribution can also reduce polysemous word vectors to single word vectors. Compared with traditional static word vectors, MSCVec shows excellent results in both word-similarity and downstream text-classification experiments. The two sets of features are then fused and extended into new semantic features, and similarity classification experiments and stacked generalization experiments are designed for comparison. In the cross-lingual sentence-level similarity detection task, SCLVec cross-lingual word vector lexical-level features outperform MSCVec multisense word vector features as the input embedding layer; deep semantic sentence-level features trained by twin recurrent neural networks outperform the semantic features of twin convolutional neural networks; extensions of traditional statistical features, especially the cross-lingual topic model (BL-LDA), effectively improve cross-lingual similarity detection performance; and the stacked generalization ensemble approach minimizes the error rate of the base classifiers and improves detection accuracy.
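As a rough illustration of the MSCVec construction described above, the Python sketch below builds a PPMI word-context matrix, factorizes it with non-negative matrix factorization, and soft-clusters the resulting low-rank vectors into a per-word sense-membership distribution. The toy corpus, the hyperparameters, the use of a Gaussian mixture in place of the paper's sparse soft clustering, and the omission of the final FastText training over the extended dictionary are simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture

def ppmi_matrix(corpus, window=2):
    """Build a positive pointwise mutual information (PPMI) word-context matrix."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    counts[idx[w], idx[sent[j]]] += 1.0
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0          # undefined PMI entries -> 0
    return np.maximum(pmi, 0.0), vocab    # keep only positive PMI

# Toy corpus in which "bank" is polysemous (river bank vs. financial bank).
corpus = [["bank", "river", "water"], ["bank", "money", "loan"],
          ["river", "water", "flow"], ["money", "loan", "interest"]]
ppmi, vocab = ppmi_matrix(corpus)

# Low-rank, mixed-sense word representations via non-negative matrix factorization.
W = NMF(n_components=3, init="nndsvda", max_iter=500).fit_transform(ppmi)

# Soft clustering of the low-rank vectors; a Gaussian mixture stands in for the
# paper's sparse soft clustering, and its posterior plays the role of the
# global sense-membership distribution of each word.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(W)
for word, probs in zip(vocab, gmm.predict_proba(W)):
    print(f"{word:10s} sense memberships: {np.round(probs, 2)}")
```

In a fuller pipeline, the context of each occurrence of a polysemous word would be scored against these membership distributions (via the negative mean log-likelihood mentioned above) to relabel the word with its sense cluster before the FastText training step.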

Highlights

  • An important means of education informatization is to apply information technology and network technology to education to realize the "Internet + education" mode [1]

  • When each database is used as the test unit, the proposed method shows good recognition performance in the document target constraint recognition task: every performance index for every database is above 75.00%, and most indexes are above 80.00%, which demonstrates the effectiveness and feasibility of the method

  • The comparison results show that, owing to the corresponding improvement of the text feature extraction model, the overall scheme proposed in this paper achieves better results in every recognition task: the weighted-average F1 value for each database is above 70% (lowest 70.17%, highest 86.78%), all significantly higher than the bag-of-words and TF-IDF models, giving higher overall acceptability (see the sketch after this list for how the weighted-average F1 is computed)
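For clarity, the minimal sketch below shows how a weighted-average F1 value like those quoted above can be computed. The labels are hypothetical placeholders rather than data from the paper, and scikit-learn's f1_score is assumed as the metric implementation.

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted document categories for one database.
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

# average="weighted" averages per-class F1 by class support, which is the sense
# in which the highlights report a weighted-average F1 per database.
print(f"weighted-average F1 = {f1_score(y_true, y_pred, average='weighted'):.4f}")
```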


Summary

Introduction

An important means of education informatization is to apply information technology and network technology to education to realize the "Internet + education" mode [1]. In addition to the roles of text similarity detection described above, its direct application has considerable practical value [2]: it plays a significant role in protecting the intellectual property rights of electronic texts and in combating illegal copying and plagiarism of academic results. Although research on relation extraction is relatively mature, the analysis above shows that extracting semantic relations from short English texts differs substantially from the classical relation extraction task. Studying this topic is therefore of great importance at both the algorithmic and the application level.

Current Status of Research
Analysis of Results
Significance
Conclusion

