Abstract

Short text is the main medium through which people express their ideas and opinions, so understanding the meaning of a short text and recognizing the semantic patterns shared by different short texts is both important and challenging. Most existing methods learn the semantic patterns of short-text pairs from word embeddings and short-text interaction. However, some of these methods are complicated and cannot fully capture the relations among words or the interaction between short-text pairs. To address this problem, a self-attention based model, Knowledge Learning for Matching Question (KLMQ), is proposed. The model uses part-of-speech information to mine relations among words, deriving the relations between short texts from grammar, syntax, and morphology. It also adopts an information-fusion strategy to strengthen the interaction between short-text pairs, ensuring that the model makes full use of the expanded information: word order, word correlations, and the relations between short-text pairs. To verify the correctness of the proposed model and the effectiveness of the expanded information, extensive experiments were carried out on public datasets. The results show that the model outperforms traditional neural network models and that the expanded information substantially improves the performance of intention multiple-representation recognition.
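The abstract's core idea — encoding each short text with self-attention over word representations fused with part-of-speech features, then comparing the pair — can be illustrated with a minimal sketch. All function names, dimensions, and the mean-pool/cosine comparison here are illustrative assumptions, not the paper's actual KLMQ architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of vectors.

    X: (seq_len, dim) array. Returns contextualized vectors of the
    same shape; each output row is an attention-weighted mix of all rows.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # pairwise similarity of tokens
    return softmax(scores, axis=-1) @ X    # weighted sum of token vectors

def encode(word_emb, pos_emb):
    """Fuse word embeddings with part-of-speech embeddings, then self-attend.

    Concatenation is one simple fusion strategy (an assumption here);
    it lets attention weights depend on both lexical and POS features.
    """
    fused = np.concatenate([word_emb, pos_emb], axis=-1)
    return self_attention(fused)

def match_score(a_words, a_pos, b_words, b_pos):
    """Encode each short text, mean-pool, and compare by cosine similarity."""
    a = encode(a_words, a_pos).mean(axis=0)
    b = encode(b_words, b_pos).mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy usage with random embeddings standing in for learned ones.
rng = np.random.default_rng(0)
text_a_words, text_a_pos = rng.normal(size=(4, 8)), rng.normal(size=(4, 4))
text_b_words, text_b_pos = rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
score = match_score(text_a_words, text_a_pos, text_b_words, text_b_pos)
```

In a trained model the word and POS embeddings would be learned parameters, and the final score would typically come from a classifier over the fused pair representation rather than raw cosine similarity; the sketch only shows how POS fusion and self-attention fit together.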


Disclaimer: All third-party content on this website/platform is and will remain the property of their respective owners and is provided on "as is" basis without any warranties, express or implied. Use of third-party content does not indicate any affiliation, sponsorship with or endorsement by them. Any references to third-party content is to identify the corresponding services and shall be considered fair use under The CopyrightLaw.