Abstract
Cross-lingual sentiment analysis addresses the scarcity of annotated corpora for low-resource languages by training a shared classifier that transfers knowledge learned from a source language to target languages. Large-scale pre-trained language models have achieved remarkable improvements in cross-lingual sentiment analysis, yet they still suffer from the lack of annotated corpora for low-resource languages. To address this problem, we propose an end-to-end architecture for cross-lingual sentiment analysis, named Distillation Language Adversarial Network (DLAN). Built on a pre-trained model, DLAN combines adversarial learning with knowledge distillation to learn language-invariant features without extra training data. We evaluate the proposed method on the Amazon review dataset, a multilingual sentiment dataset. The results show that DLAN is more effective than the baseline methods in cross-lingual sentiment analysis.
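The abstract does not spell out the training objective, but adversarial learning and knowledge distillation are commonly combined into a single loss: a sentiment classification term, a distillation term that matches temperature-softened teacher predictions, and a language-discriminator term that reaches the encoder through a gradient-reversal layer. The sketch below is a minimal, hypothetical illustration of such a combined objective; the function names, loss weights, and temperature are assumptions for illustration, not taken from the paper.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T produces softer distributions,
    # which is what knowledge distillation matches against.
    z = [v / T for v in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened predictions,
    # the standard distillation objective. Zero when the two match.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
               for pi, qi in zip(p, q))

def cross_entropy(logits, label):
    # Plain cross-entropy, used for both the sentiment head and the
    # language discriminator.
    q = softmax(logits)
    return -math.log(q[label] + 1e-12)

def dlan_total_loss(sent_logits, sent_label,
                    lang_logits, lang_label,
                    teacher_logits, student_logits,
                    lam_adv=0.1, lam_kd=1.0):
    # Hypothetical combined objective: sentiment loss + distillation loss
    # + adversarial language loss. In training, the adversarial term would
    # pass through a gradient-reversal layer before the encoder, so the
    # discriminator is trained normally while the encoder is pushed toward
    # language-invariant features.
    return (cross_entropy(sent_logits, sent_label)
            + lam_kd * kd_loss(teacher_logits, student_logits)
            + lam_adv * cross_entropy(lang_logits, lang_label))
```

For example, `kd_loss` is zero when teacher and student logits coincide and grows as their softened predictions diverge, so minimizing the total loss pulls the student toward the teacher while the adversarial term discourages language-specific features.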