Abstract
To better tackle the named entity recognition (NER) problem on languages with little/no labeled data, cross-lingual NER must effectively leverage knowledge learned from source languages with rich labeled data. Previous works on cross-lingual NER are mostly based on label projection with pairwise texts or direct model transfer. However, such methods either are not applicable if the labeled data in the source languages is unavailable, or do not leverage information contained in unlabeled data in the target language. In this paper, we propose a teacher-student learning method to address such limitations, where NER models in the source languages are used as teachers to train a student model on unlabeled data in the target language. The proposed method works for both single-source and multi-source cross-lingual NER. For the latter, we further propose a similarity measuring method to better weight the supervision from different teacher models. Extensive experiments for 3 target languages on benchmark datasets well demonstrate that our method outperforms existing state-of-the-art methods for both single-source and multi-source cross-lingual NER.
Highlights
Named entity recognition (NER) is the task of identifying text spans that belong to pre-defined categories, such as locations, person names, etc.
The proposed method does not rely on labeled data in the source language, and it leverages the available information from unlabeled data in the target language, avoiding the aforementioned limitations of previous works.
We propose a teacher-student learning method for single-/multi-source cross-lingual NER, which uses source-language models as teachers to train a student model on unlabeled data in the target language.
Summary
Named entity recognition (NER) is the task of identifying text spans that belong to pre-defined categories, such as locations, person names, etc. The proposed method does not rely on labeled data in the source language, and it leverages the available information from unlabeled data in the target language, avoiding the aforementioned limitations of previous works. We further extend our teacher-student learning method to multi-source cross-lingual NER, considering that there are usually multiple source languages available in practice, and we would prefer transferring knowledge from all source languages rather than a single one. In this case, our method still enjoys the same advantages in terms of data availability and inference efficiency, compared with existing works (Tackstrom, 2012; Chen et al., 2019; Enghoff et al., 2018; Rahimi et al., 2019). We propose a teacher-student learning method for single-source cross-lingual NER, which addresses limitations of previous works w.r.t. data availability and usage of unlabeled data. We conduct extensive experiments validating the effectiveness and reasonableness of the proposed methods, and further analyse why they attain superior performance.
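The core idea described above can be illustrated with a minimal sketch of teacher-student (knowledge-distillation) training signals: each teacher emits per-token probability distributions over NER labels on unlabeled target-language text, multiple teachers are combined by a weighted average (the weights standing in for the paper's similarity measure), and the student is trained to match the combined soft labels. All function and variable names here are illustrative assumptions, not the authors' actual implementation.

```python
import math

def combine_teachers(teacher_outputs, weights):
    """Weighted average of several teachers' per-token label distributions.

    teacher_outputs: list of teachers, each a list of per-token
    probability distributions (lists of floats summing to 1).
    weights: one non-negative weight per teacher, e.g. derived from
    a source-target language similarity measure (assumption).
    """
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize weights to sum to 1
    n_tokens = len(teacher_outputs[0])
    combined = []
    for i in range(n_tokens):
        n_labels = len(teacher_outputs[0][i])
        combined.append([
            sum(norm[k] * teacher_outputs[k][i][j]
                for k in range(len(teacher_outputs)))
            for j in range(n_labels)
        ])
    return combined

def soft_label_loss(teacher_probs, student_probs):
    """Average per-token cross-entropy of student predictions
    against the (combined) teacher soft labels."""
    loss = 0.0
    for t_dist, s_dist in zip(teacher_probs, student_probs):
        loss += -sum(t * math.log(s) for t, s in zip(t_dist, s_dist))
    return loss / len(teacher_probs)

# Toy example: two teachers, one token, two labels (e.g. O vs. PER).
teacher_a = [[0.8, 0.2]]
teacher_b = [[0.4, 0.6]]
soft_labels = combine_teachers([teacher_a, teacher_b], [1.0, 1.0])
loss = soft_label_loss(soft_labels, [[0.6, 0.4]])
```

In a real system the distributions would come from token classifiers over source-language encoders, and the loss would be minimized with gradient descent; the sketch only shows how unlabeled target-language text supplies a training signal without any source-language labeled data at transfer time.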