Abstract

The number of applications in sentiment analysis is growing daily, and research in this field is increasing. Despite the rapid growth of data sources in English, low-resource languages suffer from a lack of data for training accurate models. Moreover, users cannot trust such systems unless the output is explained. In this study, we propose a cross-lingual deep neural model to improve the accuracy of sentiment analysis for low-resource languages while providing an explainable description of the predictions. The proposed model combines a word representation model, XLM-RoBERTa, a pre-trained contextualized transformer-based cross-lingual language model, with a long short-term memory network and an attention mechanism that improves the explainability of the model and identifies the informative words that affect text polarity. Our experiments show that the proposed model outperforms state-of-the-art mono-lingual techniques and cross-lingual models. The results show a 0.55% improvement over the cross-lingual sentiment analysis model proposed by Ghasemi et al. and a 15.08% improvement over mono-lingual contextualized sentiment analysis. Moreover, we achieve a further 0.54% improvement when using the attention mechanism to enhance the model with explainability.
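
The following is a minimal sketch of the architecture the abstract describes (XLM-RoBERTa embeddings fed to an LSTM with an attention layer over the hidden states), using the Hugging Face transformers library. The layer sizes, the bidirectional LSTM, and the additive attention formulation are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel

class CrossLingualSentimentModel(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        # Pre-trained contextualized cross-lingual encoder.
        self.encoder = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        # LSTM over the contextual token embeddings (bidirectionality is an assumption).
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden_size,
                            batch_first=True, bidirectional=True)
        # Additive attention scores each token; the resulting weights can be
        # inspected to see which words drove the predicted polarity.
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        token_embs = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(token_embs)                  # (B, T, 2H)
        scores = self.attn(lstm_out).squeeze(-1)             # (B, T)
        # Mask padding tokens before normalizing the attention weights.
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1)              # explainability signal
        context = (weights.unsqueeze(-1) * lstm_out).sum(1)  # (B, 2H)
        return self.classifier(context), weights
```

Returning the attention weights alongside the logits is what makes the predictions inspectable: for a given sentence, the tokens with the largest weights are the ones the model treated as most informative for its polarity decision.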
