Abstract

This paper explores unsupervised cross-lingual word representation learning, with the specific task of acquiring a bilingual translation lexicon from monolingual corpora. First, an unsupervised cross-lingual word representation co-training scheme based on different word embedding models is designed, and it outperforms the baseline model. Motivated by the obstacles encountered in foreign language teaching in higher education and the need for innovative teaching methods, we then design and implement a linear autoencoder-based scheme for extracting the principal components of the pointwise mutual information (PMI) matrix obtained from a monolingual corpus. Building on this, a linear autoencoder-based co-training scheme for cross-lingual word representation is designed to improve the quality of the learned cross-lingual word embeddings. In a practical application of a foreign language teaching model based on this method, the largest pre- to post-test gain in the experimental class was in word sense guessing, which rose by 23.12%; sentence meaning comprehension increased by 23.39%, main idea comprehension by 16.61%, factual detail comprehension by 15.47%, and inferential judgment by 10.28%. These results further verify the feasibility of the unsupervised cross-lingual word representation co-training method.
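The core representation step described above, extracting principal components of a pointwise mutual information (PMI) matrix with a linear autoencoder, can be illustrated with a minimal sketch. The toy co-occurrence counts, function names, and plain gradient-descent training loop below are illustrative assumptions, not the paper's actual implementation; the sketch relies on the classical result that a trained linear autoencoder spans the same subspace as the top principal components.

```python
import numpy as np

def pmi_matrix(counts, eps=1e-12):
    """Positive pointwise mutual information (PPMI) from a co-occurrence count matrix."""
    total = counts.sum()
    p_xy = counts / total
    p_x = p_xy.sum(axis=1, keepdims=True)   # row (word) marginals
    p_y = p_xy.sum(axis=0, keepdims=True)   # column (context) marginals
    pmi = np.log((p_xy + eps) / (p_x @ p_y + eps))
    return np.maximum(pmi, 0.0)             # clip negatives: PPMI

def linear_autoencoder(X, dim, lr=0.1, steps=3000, seed=0):
    """Fit X ~ X @ W_enc @ W_dec by gradient descent on squared error.

    For centered X, the optimal encoder spans the top-`dim` principal
    subspace, so a linear autoencoder can stand in for PCA on the PMI matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, dim))
    W_dec = rng.normal(scale=0.1, size=(dim, d))
    losses = []
    for _ in range(steps):
        Z = X @ W_enc                        # low-dimensional codes
        R = Z @ W_dec - X                    # reconstruction residual
        losses.append(float((R * R).mean()))
        g_dec = (Z.T @ R) / n                # gradient w.r.t. decoder
        g_enc = (X.T @ (R @ W_dec.T)) / n    # gradient w.r.t. encoder
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    return W_enc, W_dec, losses

# Toy monolingual "corpus": random co-occurrence counts for 20 words.
rng = np.random.default_rng(42)
counts = rng.integers(0, 50, size=(20, 20)).astype(float)
X = pmi_matrix(counts)
X = X - X.mean(axis=0)   # center so the principal subspace is meaningful

W_enc, W_dec, losses = linear_autoencoder(X, dim=3)
```

The resulting `X @ W_enc` gives each word a 3-dimensional embedding; in a co-training setting, such embeddings from two monolingual corpora would then be aligned to induce the bilingual lexicon.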
