Abstract

A hybrid model of Convolutional Neural Network (CNN) and self-attention has achieved remarkable results in text classification. In previous research, the local semantics of text (captured by the CNN) and its global representation (extracted by self-attention) play equally important roles for every input. However, the relative importance of the two varies greatly with complex linguistic backgrounds. In this paper, we take an adaptive approach that automatically determines each model's contribution to classification according to the specific structural and grammatical information of the text, making the most of both models. To better extract variable-size features of a word, multi-scale feature attention is introduced into our hybrid model; this attention assigns larger weights to the multi-scale features that are important to a word. In addition, a new loss function is established for fine-grained emotion classification tasks. Experimental results show that the proposed model achieves noticeable improvements over existing hybrid models, with metrics 0.3 to 1.5 percentage points higher than previous methods. The results also show that the new loss function further improves the performance of our model on all fine-grained emotion classification datasets.
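The abstract does not include implementation details, but the adaptive idea can be sketched concretely: a learned gate weighs a CNN branch (multi-scale local features, combined per word by a small scale-attention layer) against a self-attention branch (global representation) for each input. The following is a minimal PyTorch sketch under those assumptions; the class name, layer choices, pooling, and dimensions are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class AdaptiveHybridClassifier(nn.Module):
    """Hypothetical sketch: fuse CNN (local) and self-attention (global)
    text representations with a learned, input-dependent gate."""

    def __init__(self, vocab_size, embed_dim=128, num_classes=5,
                 kernel_sizes=(2, 3, 4), num_filters=128, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Multi-scale convolutions capture variable-size local features.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
             for k in kernel_sizes])
        # Scale attention: score each kernel size per word (assumption).
        self.scale_attn = nn.Linear(num_filters, 1)
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads,
                                               batch_first=True)
        self.proj = nn.Linear(num_filters, embed_dim)
        # Gate decides, per input, how much each branch contributes.
        self.gate = nn.Linear(2 * embed_dim, 1)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):                        # tokens: (B, T)
        x = self.embed(tokens)                        # (B, T, E)
        # Local branch: multi-scale CNN features for every word.
        feats = [conv(x.transpose(1, 2)).transpose(1, 2)[:, :x.size(1)]
                 for conv in self.convs]              # each (B, T, F)
        stacked = torch.stack(feats, dim=2)           # (B, T, S, F)
        scale_w = torch.softmax(self.scale_attn(stacked), dim=2)
        local = (scale_w * stacked).sum(dim=2)        # weighted scales
        local = self.proj(local).mean(dim=1)          # pooled: (B, E)
        # Global branch: self-attention pooled over tokens.
        attn_out, _ = self.self_attn(x, x, x)
        global_rep = attn_out.mean(dim=1)             # (B, E)
        # Adaptive fusion: gate in [0, 1] weighs the two branches.
        g = torch.sigmoid(self.gate(torch.cat([local, global_rep], -1)))
        fused = g * local + (1 - g) * global_rep
        return self.classifier(fused)

model = AdaptiveHybridClassifier(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 32)))      # 8 texts, 32 tokens
```

The sigmoid gate makes the contribution of each branch a function of the input itself, which is the essence of the adaptive strategy described above; a fixed 50/50 combination would reduce this to the earlier hybrid models.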
