Abstract

In this paper, an improved long short-term memory (LSTM)-based deep neural network structure is proposed for learning the semantic similarity of variable-length Chinese sentences. The Siamese LSTM, a sequence-insensitive deep neural network model, has limited ability to capture natural-language semantics because it struggles to account for semantic differences arising from differences in syntactic structure or word order within a sentence. The proposed model therefore integrates the syntactic component features of the words in a sentence into the word vector representation layer to express the sentence's syntactic structure and the interdependence between words. In addition, a relative position embedding layer is introduced, mapping the relative positions of the words in the sentence into a high-dimensional space to capture local word-position information. The model uses a parallel structure to map two sentences into the same high-dimensional space and obtain fixed-length sentence vector representations; after aggregation, the sentence similarity is computed in the output layer. Experiments on Chinese sentences show that the model achieves good results in semantic similarity calculation.
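To make the described architecture concrete, the sketch below shows one plausible reading of it in PyTorch: word embeddings are concatenated with syntactic component (e.g. POS-tag) embeddings and relative position embeddings, a shared LSTM encodes each sentence of the pair into a fixed-length vector, and the output layer computes a similarity score. All layer sizes, the POS-tag vocabulary, and the choice of cosine similarity as the output function are illustrative assumptions; the abstract does not specify the paper's exact configuration.

```python
# Minimal sketch of the proposed Siamese architecture, under assumed
# hyperparameters; not the authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSyntaxLSTM(nn.Module):
    def __init__(self, vocab_size, pos_tag_size, max_len,
                 word_dim=128, pos_dim=32, rel_pos_dim=32, hidden_dim=128):
        super().__init__()
        # Word vectors augmented with syntactic component (POS-tag) features.
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.pos_tag_emb = nn.Embedding(pos_tag_size, pos_dim, padding_idx=0)
        # Relative word positions mapped into a high-dimensional space.
        self.rel_pos_emb = nn.Embedding(max_len, rel_pos_dim)
        self.lstm = nn.LSTM(word_dim + pos_dim + rel_pos_dim,
                            hidden_dim, batch_first=True)

    def encode(self, words, pos_tags):
        # words, pos_tags: (batch, seq_len) integer id tensors.
        seq_len = words.size(1)
        positions = torch.arange(seq_len, device=words.device)
        positions = positions.unsqueeze(0).expand_as(words)
        x = torch.cat([self.word_emb(words),
                       self.pos_tag_emb(pos_tags),
                       self.rel_pos_emb(positions)], dim=-1)
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]  # fixed-length sentence vector: (batch, hidden_dim)

    def forward(self, words_a, tags_a, words_b, tags_b):
        # Parallel (shared-weight) branches map both sentences into the
        # same space; similarity is computed after aggregation.
        v_a = self.encode(words_a, tags_a)
        v_b = self.encode(words_b, tags_b)
        return F.cosine_similarity(v_a, v_b, dim=-1)
```

Because both branches share one encoder, the two sentence vectors land in the same space by construction, which is what makes a direct similarity comparison in the output layer meaningful.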
