Distributional Semantic Models (DSMs) provide a standard means of representing the meaning of words and sentences. A DSM yields a quantitative measure of how closely two linguistic representations are related, but it cannot by itself classify the different semantic relations between words. This paper presents a Chinese semantic analysis method based on the Word2Vec model and a stacked bidirectional Long Short-Term Memory (stacked LSTM) model. The Word2Vec model first maps each word to a high-dimensional word vector that captures its semantic features, and we evaluate the performance of the two typical Word2Vec variants: Skip-gram and Continuous Bag-of-Words (CBOW). The stacked LSTM model is then used to extract features from the resulting sequences of word vectors. However, the notion of similarity of meaning remains underspecified in DSMs. To address this underspecification, we extend the embedding approach and test several methods for automatically learning semantic relations between words, evaluating them in both unsupervised and supervised settings. We find that, in the unsupervised setting, distributional models can in many cases assign a high similarity score to a word's synonyms, whereas a deep learning classifier performs best at recognizing semantic relations.
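As a rough illustration of the Word2Vec step, the following minimal sketch assumes the gensim library (4.x API) and a toy segmented Chinese corpus; the variable names and corpus are illustrative, not the paper's actual data or code.

```python
# Minimal sketch of the Word2Vec step, assuming gensim 4.x and a toy
# tokenized Chinese corpus; `corpus` and all sizes are illustrative only.
from gensim.models import Word2Vec

# Tiny illustrative corpus: each sentence is a list of already-segmented tokens.
corpus = [
    ["我", "喜欢", "自然", "语言", "处理"],
    ["语言", "模型", "捕捉", "词", "的", "语义", "特征"],
]

# sg=1 selects the Skip-gram variant, sg=0 selects CBOW.
skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)
cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)

# Each word is now a high-dimensional vector; a DSM-style similarity score
# is the cosine similarity between two word vectors.
vector = skipgram.wv["语言"]                    # 100-dimensional word vector
score = skipgram.wv.similarity("语言", "模型")  # cosine similarity in [-1, 1]
print(vector.shape, score)
```

The similarity score above is exactly the kind of quantitative relatedness measure a DSM provides; it says nothing about which semantic relation (synonymy, antonymy, hypernymy, etc.) holds between the two words, which is why a separate classifier is needed.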
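The stacked bidirectional LSTM used for feature extraction and relation classification could be sketched as below, assuming PyTorch; the layer sizes, the number of relation classes, and the class name are assumptions made for illustration rather than the authors' exact architecture.

```python
# Minimal sketch of a stacked bidirectional LSTM over Word2Vec vectors,
# assuming PyTorch; dimensions and relation-class count are illustrative.
import torch
import torch.nn as nn

class StackedBiLSTMClassifier(nn.Module):
    def __init__(self, embed_dim=100, hidden_dim=128, num_layers=2, num_relations=5):
        super().__init__()
        # num_layers=2 stacks two LSTM layers; bidirectional=True reads the
        # word-vector sequence in both directions.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            bidirectional=True, batch_first=True)
        # Classifier over semantic relation labels (e.g. synonym, antonym, ...).
        self.fc = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, word_vectors):
        # word_vectors: (batch, seq_len, embed_dim) sequence of Word2Vec vectors.
        outputs, (h_n, _) = self.lstm(word_vectors)
        # Concatenate the top layer's final forward and backward hidden states.
        features = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.fc(features)

model = StackedBiLSTMClassifier()
dummy = torch.randn(4, 12, 100)   # batch of 4 sequences of 12 word vectors
logits = model(dummy)             # (4, num_relations) relation scores
print(logits.shape)
```

In this setup the Word2Vec layer supplies the continuous word vectors and the stacked LSTM learns sequence-level features from them, so the final linear layer can be trained in a supervised setting to recognize semantic relations directly.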