Implicit sentiment analysis (ISA) differs from traditional text sentiment analysis in that it cannot rely on emotional words as cues, and the expression of sentiment is usually vaguer. Identifying implicit sentiment is therefore harder: it requires a deeper understanding of the context even when emotional words are absent. Prior research has focused on context feature modeling and on increasingly sophisticated feature extraction mechanisms rather than starting from the emotional perspective. An intuitive way to address this challenge is to enlarge the differences between the emotional features of text samples. We propose a supervised contrastive learning (SCL) training method that enables the model to contrast samples by emotion label even when the emotional cues in the text are weak. SCL training increases the average embedding distance between texts with different emotion labels and thereby enhances implicit sentiment discrimination. Moreover, prior work indicates that contextual information improves implicit sentiment classification; we therefore adopt a straightforward context feature fusion method (bi-affine) rather than a more complicated context feature modeling approach. To evaluate the effectiveness of the proposed method, we conducted experiments on the SMP2019-ECISA (Chinese implicit sentiment analysis) dataset. The results show a 2.13% improvement in F1 score over the BERT baseline, demonstrating the effectiveness of our methods.
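As a rough illustration only (not the paper's implementation, whose details are not given in the abstract), a supervised contrastive objective over emotion labels can be sketched as follows: sentence embeddings sharing a label are treated as positives and pulled together, while all other samples in the batch act as negatives. The function name, the pure-Python style, and the temperature value are assumptions for the sketch.

```python
import math

def sup_con_loss(embeddings, labels, tau=0.1):
    """Sketch of a supervised contrastive loss over emotion labels.

    For each anchor i, positives P(i) are the other samples with the same
    label; the loss is  sum_i -1/|P(i)| * sum_{p in P(i)}
    log( exp(z_i.z_p / tau) / sum_{a != i} exp(z_i.z_a / tau) ).
    tau=0.1 is an assumed temperature, not a value from the paper.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    z = [normalize(v) for v in embeddings]  # L2-normalised embeddings
    n = len(z)
    total = 0.0
    for i in range(n):
        # Scaled cosine similarities of anchor i against all other samples.
        sims = {j: sum(a * b for a, b in zip(z[i], z[j])) / tau
                for j in range(n) if j != i}
        denom = sum(math.exp(s) for s in sims.values())
        positives = [j for j in sims if labels[j] == labels[i]]
        if not positives:
            continue  # anchor has no same-label partner in the batch
        # Average log-probability of picking a positive over the batch.
        total += -sum(sims[p] - math.log(denom)
                      for p in positives) / len(positives)
    return total
```

Minimising this loss pushes embeddings of different emotion labels apart: with well-separated same-label clusters the loss is near zero, while mixing labels across clusters drives it up, which is the "increased average embedding distance" effect described above.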