Abstract
Textual Emotion Recognition (TER) is an important task in Natural Language Processing (NLP), due to its high impact in real-world applications. Prior research has tackled the automatic classification of emotion expressions in text by maximising the probability of the correct emotion class using cross-entropy loss. However, this approach does not account for intra- and inter-class variations within and between emotion classes. To overcome this problem, we introduce a variant of triplet centre loss as an auxiliary task to emotion classification. This allows TER models to learn compact and discriminative features. Furthermore, we introduce a method for evaluating the impact of intra- and inter-class variations on each emotion class. Experiments performed on three data sets demonstrate the effectiveness of our method when applied to each emotion class in comparison to previous approaches. Finally, we present analyses that illustrate the benefits of our method in terms of improving the prediction scores as well as producing discriminative features.
Highlights
The growing interest in Textual Emotion Recognition (TER) has been motivated by the proliferation of social media and online data, which have made it possible for people to communicate and share opinions on a variety of topics.
We evaluate the ability of our method to distinguish between intra- and inter-class variations with respect to each emotion.
We briefly describe the methods that we have compared, including methods that learn a joint loss function to improve the results of emotion classification and those that only use the cross-entropy loss function (CEL).
Summary
The growing interest in Textual Emotion Recognition (TER) has been motivated by the proliferation of social media and online data, which have made it possible for people to communicate and share opinions on a variety of topics. The majority of previous research has treated emotion classification as a single-label prediction problem, selecting the most dominant class for a given emotion expression. This approach makes use of cross-entropy loss, which attempts to maximise the probability of the correct class. For example, the sentence S2 is annotated with “disgust”, yet it could plausibly also be labelled “anger”, due to the absence of explicit emotion keywords for “disgust” and the similarity in linguistic expression between the two emotions. This linguistic overlap between different emotion classes can cause TER models to mislabel emotions and affects their performance in selecting the correct label. Mohammad and Bravo-Marquez [27] observe that negative emotions are highly associated with each other.
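The joint objective described above can be sketched in a few lines. The sketch below is an illustrative assumption rather than the authors' exact formulation: the function names, the NumPy implementation, the margin, and the weighting parameter `lam` are all hypothetical, and the triplet-centre term follows the common hinge form (pull a feature toward its own class centre, push it beyond a margin from the nearest other-class centre).

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def triplet_center_loss(features, labels, centers, margin=1.0):
    # Hinge loss: distance to the correct class centre should be
    # smaller (by at least `margin`) than the distance to the
    # nearest centre of any other emotion class.
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    idx = np.arange(len(labels))
    pos = dists[idx, labels]            # distance to own class centre
    masked = dists.copy()
    masked[idx, labels] = np.inf        # exclude own centre
    neg = masked.min(axis=1)            # nearest other-class centre
    return np.maximum(0.0, pos - neg + margin).mean()

def joint_loss(logits, features, labels, centers, lam=0.1, margin=1.0):
    # Cross-entropy for classification plus the auxiliary
    # triplet-centre term, weighted by a hypothetical factor `lam`.
    return cross_entropy(logits, labels) + lam * triplet_center_loss(
        features, labels, centers, margin)
```

In practice the class centres would be learned jointly with the encoder (e.g. updated by gradient descent alongside the model parameters); the weighting `lam` balances classification accuracy against feature compactness.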