Abstract
The proposed framework addresses cross-lingual transfer learning using Parallel Factor Analysis 2 (PARAFAC2). To avoid the need for multilingual parallel corpora, a pairwise setting is adopted in which a PARAFAC2 model is fitted to documents written in English (the source language) and in a different target language. First, an unsupervised PARAFAC2 model is fitted to pairs of unlabelled parallel corpora to learn the latent relationship between the source and target languages. The fitted model is then used to create embeddings for a text classification task (document classification or authorship attribution). Subsequently, a logistic regression classifier is fitted to the embeddings of the source language training documents and applied to the embeddings of the target language training documents. Following the zero-shot setting, no labels are exploited for the target language documents. The proposed framework incorporates a self-learning process that uses the predicted labels as pseudo-labels to train a new, pseudo-supervised PARAFAC2 model, which aims to extract latent class-specific information while fusing language-specific information. A thorough evaluation is conducted on cross-lingual document classification and cross-lingual authorship attribution. Remarkably, the proposed framework achieves results competitive with deep learning methods on cross-lingual transfer learning tasks.
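The zero-shot step described above (classify unlabelled target documents with a model trained only on source labels, then reuse the predictions as pseudo-labels) can be sketched as follows. This is a minimal, hedged illustration: the toy embeddings stand in for the PARAFAC2 document factors, and a simple nearest-centroid rule substitutes for the paper's logistic regression classifier; all names and values here are illustrative, not taken from the paper.

```python
# Hedged sketch of the zero-shot pseudo-labelling step (pure Python).
# X_src / X_tgt stand in for PARAFAC2-derived embeddings of parallel documents;
# a nearest-centroid classifier substitutes for the paper's logistic regression.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fit_centroids(X, y):
    """One centroid per class, computed from labelled source embeddings."""
    return {c: centroid([x for x, lab in zip(X, y) if lab == c])
            for c in sorted(set(y))}

def predict(model, X):
    """Assign each embedding the label of its nearest class centroid."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(model, key=lambda c: sq_dist(x, model[c])) for x in X]

# Toy source-language embeddings with labels; target embeddings are unlabelled.
X_src = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
y_src = [0, 0, 1, 1]
X_tgt = [[0.8, 0.0], [0.0, 0.8]]  # zero-shot: target labels are never used

model = fit_centroids(X_src, y_src)
pseudo_labels = predict(model, X_tgt)  # -> [0, 1]
# In the framework, these pseudo-labels would seed the second,
# pseudo-supervised PARAFAC2 fit.
```

In the actual pipeline, the pseudo-labels feed back into a class-aware PARAFAC2 decomposition rather than being used directly as final predictions.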