Abstract

In an era where user-generated content is becoming ever more prevalent, reliable methods are needed to judge the emotional properties of such complex texts, for example when developing corpora for machine learning. In this study, we focus on Dutch Twitter messages, a genre that is high in emotional content and frequently investigated in computational linguistics. We compare three methods for annotating the emotional dimensions valence, arousal and dominance in 300 Tweets: rating scales, pairwise comparison and best–worst scaling. We evaluate the annotation methods on the criterion of inter-annotator agreement, based on judgments from 18 annotators in total. On this dataset, best–worst scaling yields the highest inter-annotator agreement. The difference in agreement is largest for dominance and smallest for valence, suggesting that the benefit of best–worst scaling becomes more pronounced as the annotation task gets more difficult. However, we also find that best–worst scaling is considerably more time-consuming than rating scale and pairwise comparison annotations. This leads us to conclude that, particularly when building corpora for computational models, a comparative assessment of annotation quality versus cost needs to be made.
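The abstract does not spell out how best–worst scaling judgments are turned into per-item scores; a common counting-based procedure scores each item as (times chosen best − times chosen worst) divided by the number of tuples it appears in, giving a value in [−1, 1]. The sketch below illustrates that procedure; the function name `bws_scores` and the toy annotations are hypothetical, not from the study.

```python
from collections import Counter

def bws_scores(annotations):
    """Counting-based best-worst scaling scores in [-1, 1].

    annotations: iterable of (items, best, worst) tuples, where
    `items` is the list shown to the annotator and `best`/`worst`
    are the items they selected.
    """
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        seen.update(items)   # count every appearance of each item
        best[b] += 1         # count "best" selections
        worst[w] += 1        # count "worst" selections
    # score = (#best - #worst) / #appearances for each item
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Toy example: three 4-tuples over items a-d
annotations = [
    (["a", "b", "c", "d"], "a", "d"),
    (["a", "b", "c", "d"], "a", "c"),
    (["a", "b", "c", "d"], "b", "d"),
]
scores = bws_scores(annotations)
```

With these toy judgments, item "a" scores 2/3 (best twice, never worst, seen three times) and "d" scores −2/3, producing a real-valued ranking from purely comparative judgments.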
