Abstract

When a traditional network structure is connected to a pre-trained language model for sentence embedding, it relies on simple word weights to generate sentence vectors, which easily overlooks the global and contextual semantic details of a sentence and reduces the accuracy of the representation. To address this issue, we develop a Multidirectional Attention Interaction Construction-Bert sentence representation framework (MAI-CBert). First, the model applies deep attention to transform the input emotional sentences, reducing misunderstandings caused by the unbalanced distribution of word weights; at the same time, it applies horizontal and vertical attention to emotional constructions, generating construction vectors from two different directions that focus on words with salient features and establish close dependencies among them. Second, a dynamic interaction strategy realizes the interaction between attention in different directions, so that the information flows form a complementary relationship and yield more effective sentence vectors. Notably, the loss function is refactored to improve the representation robustness of the model and to avoid catastrophic forgetting. Experimental results on the SemEval-14 and ACL-14 benchmark datasets demonstrate that the MAI-CBert sentence representation framework is robust and competitive.
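The abstract describes attention computed in two directions over a sentence's token-embedding matrix, with a dynamic interaction fusing the two views into one sentence vector. The sketch below illustrates that general idea in NumPy; it is a minimal, hypothetical interpretation, not the paper's actual architecture: the weight parameters `Wh`, `Wv`, `Wg`, the gated fusion, and the toy sizes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multidirectional_sentence_vector(H, Wh, Wv, Wg):
    """Fuse two directional attention views of token embeddings H (n x d).

    Horizontal attention weights the n tokens (rows); vertical attention
    weights the d embedding dimensions (columns); a sigmoid gate blends
    the two pooled vectors into a single sentence vector. All parameter
    shapes here are illustrative assumptions, not the paper's design.
    """
    # Horizontal direction: score each token, softmax over the n tokens.
    alpha = softmax(H @ Wh, axis=0)            # (n,) token weights
    a = alpha @ H                              # (d,) token-weighted pooling

    # Vertical direction: score each dimension, softmax over the d columns.
    beta = softmax(H.T @ Wv, axis=0)           # (d,) dimension weights
    b = (H * beta).mean(axis=0)                # (d,) dimension-reweighted pooling

    # Dynamic interaction: a gate blends the two complementary views.
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([a, b]) @ Wg)))  # (d,) gate
    return g * a + (1.0 - g) * b               # (d,) fused sentence vector

n, d = 6, 8                                    # toy sentence length and hidden size
H = rng.normal(size=(n, d))                    # stand-in for BERT token embeddings
s = multidirectional_sentence_vector(
    H, rng.normal(size=d), rng.normal(size=n), rng.normal(size=(2 * d, d)))
print(s.shape)                                 # (8,)
```

In this reading, the "complementary relationship" comes from the gate: where one directional view is uninformative, the other dominates the fused vector.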
