Abstract

For short texts, which contain few words and sparse features, it is important to exploit contextual semantic information to enrich the text representation. Static word vectors assign the same representation to a word regardless of the sentence in which it appears, so contextual semantic information is not fully utilized. This paper therefore proposes combining BERT and ELMo to mine more comprehensive contextual semantic information from different dimensions and enrich the text representation. BERT captures word-level, sentence-level, and inter-sentence characteristics, while ELMo addresses polysemy. An LSTM and an attention mechanism are used to extract high-dimensional text features, and a CRF is used to recognize intentions. Experiments show that the proposed multi-dimensional dynamic word vector intention recognition model (MD-Intention) achieves good performance on both the ATIS-2 and SNIPS datasets.
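To illustrate the described pipeline, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes precomputed BERT (768-d) and ELMo (1024-d) token embeddings, concatenates them, passes them through a BiLSTM with additive attention, and decodes labels with a CRF layer (via the pytorch-crf package). All dimensions, the model name `MDIntentionSketch`, and the exact wiring of the CRF are illustrative assumptions.

```python
# Hypothetical sketch of the BERT+ELMo -> BiLSTM -> attention -> CRF pipeline.
# Requires: torch, pytorch-crf (pip install pytorch-crf). Dimensions are assumptions.
import torch
import torch.nn as nn
from torchcrf import CRF


class MDIntentionSketch(nn.Module):
    def __init__(self, bert_dim=768, elmo_dim=1024, hidden=256, num_labels=7):
        super().__init__()
        # Fuse the two contextual embeddings by concatenating along the feature axis.
        self.lstm = nn.LSTM(bert_dim + elmo_dim, hidden,
                            batch_first=True, bidirectional=True)
        # Additive attention scores over BiLSTM states to weight informative tokens.
        self.attn = nn.Linear(2 * hidden, 1)
        self.emit = nn.Linear(2 * hidden, num_labels)   # per-token emission scores
        self.crf = CRF(num_labels, batch_first=True)    # structured decoding layer

    def forward(self, bert_emb, elmo_emb, mask, labels=None):
        x = torch.cat([bert_emb, elmo_emb], dim=-1)     # (B, T, 768 + 1024)
        h, _ = self.lstm(x)                             # (B, T, 2*hidden)
        weights = torch.softmax(
            self.attn(h).squeeze(-1).masked_fill(~mask, -1e9), dim=-1)
        h = h * weights.unsqueeze(-1)                   # re-weight token features
        emissions = self.emit(h)                        # (B, T, num_labels)
        if labels is not None:                          # training: negative log-likelihood
            return -self.crf(emissions, labels, mask=mask)
        return self.crf.decode(emissions, mask=mask)    # inference: best label sequence


# Toy usage with random tensors standing in for real BERT/ELMo outputs.
B, T = 2, 10
model = MDIntentionSketch()
bert_emb, elmo_emb = torch.randn(B, T, 768), torch.randn(B, T, 1024)
mask = torch.ones(B, T, dtype=torch.bool)
labels = torch.randint(0, 7, (B, T))
print(model(bert_emb, elmo_emb, mask, labels))          # scalar training loss
print(model(bert_emb, elmo_emb, mask))                  # decoded label sequences
```

The concatenation step is one simple way to let the downstream BiLSTM draw on both embedding spaces; the paper's exact fusion strategy may differ.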
