Abstract

Moral expressions in online communication can strongly shape how discussions are framed and influence subsequent online behaviour. Despite substantial research on extracting moral sentiment from English text, low-resource languages such as Persian lack both the resources and the research attention this important topic deserves. We address this gap using Moral Foundations Theory (MFT) as the underlying moral-psychology paradigm. We developed a Twitter data set of 8000 tweets manually annotated for moral foundations, and we established a baseline for computing moral sentiment from Persian text. We evaluate a range of state-of-the-art models, both dictionary-based and neural, including distributed dictionary representations (DDR), long short-term memory (LSTM) networks, and bidirectional encoder representations from transformers (BERT). Our findings show that, among these models, fine-tuning a pre-trained Persian BERT language model with a linear classification head yields the best results. We further analysed this model to determine which of its layers contributes most to this superior accuracy. We also propose an alternative transformer-based model that achieves results competitive with the BERT model despite its smaller size and faster inference time. The proposed model can serve as a tool for analysing moral sentiment and framing in Persian text in downstream social and psychological studies. We also hope our work provides resources for further improving methods for computing moral sentiment in Persian text.
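The best-performing setup described above (a pre-trained BERT encoder with a linear network as the classifier) can be sketched as follows. This is a minimal illustration only: the tiny randomly initialised encoder below is a stand-in for the actual pre-trained Persian BERT checkpoint used in the paper, and the five output classes are an assumption based on the five moral foundations of MFT (the paper's exact label scheme may differ).

```python
import torch
import torch.nn as nn

class MoralClassifier(nn.Module):
    """Sketch of a BERT-style encoder with a linear classification head.

    The encoder here is a small placeholder; in practice it would be
    replaced by a pre-trained Persian BERT model whose weights are
    fine-tuned jointly with the linear head.
    """

    def __init__(self, hidden=64, n_foundations=5):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Linear head over the first-token ([CLS]-style) representation.
        self.classifier = nn.Linear(hidden, n_foundations)

    def forward(self, x):
        h = self.encoder(x)           # (batch, seq_len, hidden)
        cls = h[:, 0, :]              # pooled first-token representation
        return self.classifier(cls)   # logits over moral-foundation labels

model = MoralClassifier()
logits = model(torch.randn(2, 16, 64))  # 2 examples, 16 tokens, dim 64
print(logits.shape)  # torch.Size([2, 5])
```

Fine-tuning would then train both the encoder and the head with a standard cross-entropy loss over the annotated tweets; freezing the encoder and training only the linear layer is the common lighter-weight alternative.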
