Abstract

The Distributed Semantic Model (DSM) has become a standard way of expressing the meaning of words and sentences. A DSM provides a quantitative measure of how closely related two linguistic representations are, but it cannot automatically classify the different semantic relations between them. This paper presents a Chinese semantic analysis method based on the Word2Vec model and a stacked bidirectional Long Short-Term Memory (stacked LSTM) model. The Word2Vec model captures a word's semantic features and transfers them into a high-dimensional word vector; we first evaluate the performance of two typical Word2Vec variants, Skip-gram and Continuous Bag-Of-Words (CBOW). The stacked LSTM models are then used to extract features from the sequences of continuous word vectors. The concept of similarity of meaning thus remains underspecified in the DSM; in an effort to solve this underspecification problem, an evolved embedding scheme is introduced. We also test different kinds of machine learning approaches for automatically classifying these semantic relations, evaluating them in both unsupervised and supervised settings. In most cases we find that the distributional model assigns high similarity scores to synonyms, while the deep learning classifier performs best at recognizing semantic relations.
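To make the described pipeline concrete, the sketch below shows one way Word2Vec embeddings (trained with gensim in either Skip-gram or CBOW mode) could feed a stacked bidirectional LSTM classifier that scores candidate semantic relations. This is not code from the paper: the library choices, toy corpus, relation count, and hyperparameters are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Toy tokenized corpus; a real setup would use a large, word-segmented Chinese corpus.
corpus = [
    ["猫", "坐", "在", "垫子", "上"],
    ["狗", "躺", "在", "地毯", "上"],
]

# sg=1 selects Skip-gram; sg=0 selects CBOW (the two variants compared in the abstract).
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

class StackedLSTMClassifier(nn.Module):
    """Stacked (multi-layer) bidirectional LSTM over pretrained word vectors,
    followed by a linear layer that scores each candidate semantic relation."""
    def __init__(self, embed_dim=100, hidden_dim=128, num_layers=2, num_relations=4):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, x):                 # x: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)        # h_n: (num_layers * 2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states of the top layer.
        top = torch.cat([h_n[-2], h_n[-1]], dim=1)
        return self.fc(top)               # unnormalized relation scores

# Embed one toy sentence with the trained Word2Vec model and classify it.
sentence = corpus[0]
vectors = torch.tensor(np.stack([w2v.wv[w] for w in sentence])).unsqueeze(0)
model = StackedLSTMClassifier()
logits = model(vectors)
print(logits.shape)  # torch.Size([1, 4])

In practice the classifier would be trained with a cross-entropy loss over labeled word or sentence pairs; the number of relation classes here is only a placeholder.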
