Abstract

With the development of artificial intelligence, more and more people hope that computers can understand human language through natural language processing, learn to think like human beings, and ultimately replace humans in highly difficult tasks that require cognitive ability. As a key technology of natural language understanding, sentence-representation reasoning focuses mainly on the sentence representation method and the reasoning model. Although performance has improved, problems remain, such as incomplete expression of sentence semantics, insufficient depth in reasoning models, and a lack of interpretability of the reasoning process. In this paper, a multi-layer semantic representation network is designed for sentence representation. A multi-attention mechanism obtains semantic information at different levels of a sentence, and the word-order information of the sentence is integrated by adding a relative position mask between words to reduce the uncertainty caused by word order. Finally, the method is verified on the tasks of textual entailment recognition and sentiment classification. The experimental results show that the multi-layer semantic representation network improves the accuracy and comprehensiveness of sentence representation.
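The abstract does not specify the exact form of the relative position mask, so the following is a minimal sketch, assuming the common formulation in which an attention logit is penalized in proportion to the distance between two tokens; the decay rate and the function names (relative_position_mask, masked_attention) are illustrative assumptions, not the authors' implementation.

    # Sketch of a relative-position mask added to attention logits (PyTorch).
    # ASSUMPTION: a linear distance penalty; the paper's exact mask may differ.
    import torch

    def relative_position_mask(seq_len: int, decay: float = 0.1) -> torch.Tensor:
        # Bias matrix of shape [seq_len, seq_len]: the farther apart two
        # tokens are, the more their attention logit is reduced.
        positions = torch.arange(seq_len)
        distance = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs().float()
        return -decay * distance

    def masked_attention(q, k, v, decay: float = 0.1):
        # Scaled dot-product attention with the relative-position bias.
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        scores = scores + relative_position_mask(q.size(-2), decay)
        return torch.softmax(scores, dim=-1) @ v

    # Usage: a batch of 2 sentences, 5 tokens each, 8-dimensional embeddings.
    x = torch.randn(2, 5, 8)
    print(masked_attention(x, x, x).shape)  # torch.Size([2, 5, 8])

Because the penalty grows with token distance, nearby words attend to each other more strongly, which injects word-order information into an otherwise order-insensitive attention mechanism.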

Highlights

  • Natural language inference (NLI) has become one of the most important benchmark tasks in natural language understanding because of the complex language understanding and deep reasoning it involves [1]

  • The experimental results show that the multi-layer semantic representation network improves the accuracy and comprehensiveness of sentence representation

  • For the textual entailment recognition task, a complete sentence-representation reasoning model is formed by combining a general reasoning model [38] with the multi-layer semantic representation network, so that the performance of the multi-layer semantic network can be judged without interference from the reasoning process


Summary

Introduction

Natural language inference (NLI) has become one of the most important benchmark tasks in natural language understanding because of the complex language understanding and deep reasoning it involves [1]. It is assumed that sentences with similar context information usually carry the same or similar semantic information; based on this assumption, skip-thought, a general sentence representation model for learning high-quality sentence vectors, was proposed. Shen [24] designed DiSAN, a bidirectional self-attention network. It uses a self-attention mechanism to encode sentence semantics at the scalar level and multi-layer attention to expand the resulting scalars into vector representations. The advantage of this method is that it encodes sentence semantics without a recurrent or convolutional network, which avoids falling into a local optimum. This paper combines a bidirectional long short-term memory (Bi-LSTM) network with a multi-attention mechanism to obtain semantic information at different levels of the sentence, and all of this information is fused to form a complete sentence embedding; a sketch of such an encoder is given below.
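To make this pipeline concrete, here is a minimal sketch, assuming a standard Bi-LSTM whose hidden states are pooled by several independent attention heads and whose head outputs are concatenated into a single sentence embedding; the class name MultiAttentionSentenceEncoder, the layer sizes, and the number of heads are illustrative assumptions, not the authors' hyperparameters.

    # Sketch: Bi-LSTM encoder with multi-head attentive pooling (PyTorch).
    import torch
    import torch.nn as nn

    class MultiAttentionSentenceEncoder(nn.Module):  # hypothetical name
        def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128,
                     num_heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            # One scoring vector per head; each head can focus on a
            # different level of the sentence's semantics.
            self.heads = nn.Linear(2 * hidden_dim, num_heads)

        def forward(self, token_ids):
            h, _ = self.bilstm(self.embed(token_ids))      # [B, T, 2H]
            weights = torch.softmax(self.heads(h), dim=1)  # attend over tokens
            # Weighted sum of hidden states per head: [B, heads, 2H]
            pooled = torch.einsum('bth,btd->bhd', weights, h)
            return pooled.flatten(1)  # fused embedding [B, heads * 2H]

    encoder = MultiAttentionSentenceEncoder()
    ids = torch.randint(0, 10000, (2, 7))  # batch of 2 sentences, 7 tokens
    print(encoder(ids).shape)              # torch.Size([2, 1024])

Each head learns its own softmax distribution over the tokens, so different heads can emphasize different aspects of the sentence before the fused embedding is passed to a downstream reasoning model.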

SNLI Dataset
Multi-NLI Dataset
Methods
Semantic Extraction Based on Bidirectional Long Short-Term Memory Network
Design of Semantic
Natural Language Sentence Vectorization
Word2Char module
Embedded layer
Multi-Layer Semantic Information Extraction and Enhancement
Multi-layer information extraction
Sentence Embedding Representation Generation
Experimental Steps
Evaluation Index
Parameter Setting
Experimental Results on SNLI Dataset
Experimental Results on Multi-NLI Dataset
Experimental Results on Yelp Dataset
Semantic Relevance Analysis