Named entity recognition (NER) is a fundamental task in natural language processing that strongly influences the performance of downstream tasks. Cross-task transfer learning is more naturally suited to low-resource NER than cross-lingual and cross-domain transfer learning. Existing cross-task transfer learning methods improve low-resource NER by leveraging relevant information from auxiliary tasks, such as sentence-level and token-level information. However, these methods do not fully exploit token-level information about entities, leaving room for improvement in low-resource NER. To further improve performance, this paper proposes a simple and effective cross-task transfer learning method called ECTTLNER, which introduces four auxiliary prediction tasks, Sentence Contains Entities, Sentence Entity Number, Token Is Entity, and Token Boundary Label, and trains them jointly with the main sequence labeling task in a multi-task learning framework. Experimental results on three NER datasets demonstrate that ECTTLNER outperforms a set of state-of-the-art baseline models, achieving more than a 2.6% improvement in F1-score, particularly in low-resource scenarios.
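To make the multi-task setup concrete, the sketch below shows one plausible way to attach the four auxiliary heads to a shared encoder with hard parameter sharing. The abstract does not specify the architecture, so the BiLSTM encoder, the head shapes, the mean-pooled sentence representation, and all dimensions are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ECTTLNERSketch(nn.Module):
    """Hypothetical multi-task NER model: a shared encoder feeds the main
    sequence-labeling head plus the four auxiliary heads named in the
    abstract. All sizes and pooling choices are illustrative assumptions."""

    def __init__(self, vocab_size=30000, hidden=256, num_tags=9, num_boundary_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Shared BiLSTM encoder (assumed; the paper may use a different encoder).
        self.encoder = nn.LSTM(hidden, hidden // 2, bidirectional=True, batch_first=True)
        # Main task: sequence labeling over entity tags (e.g., a BIO scheme).
        self.ner_head = nn.Linear(hidden, num_tags)
        # Token-level auxiliary heads.
        self.token_is_entity = nn.Linear(hidden, 2)                 # Token Is Entity
        self.token_boundary = nn.Linear(hidden, num_boundary_tags)  # Token Boundary Label
        # Sentence-level auxiliary heads over a pooled sentence vector.
        self.sent_contains_entities = nn.Linear(hidden, 2)  # Sentence Contains Entities
        self.sent_entity_number = nn.Linear(hidden, 1)      # Sentence Entity Number

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))  # (batch, seq_len, hidden)
        sent = h.mean(dim=1)                        # mean pooling (assumed)
        return {
            "ner": self.ner_head(h),
            "token_is_entity": self.token_is_entity(h),
            "token_boundary": self.token_boundary(h),
            "sent_contains_entities": self.sent_contains_entities(sent),
            "sent_entity_number": self.sent_entity_number(sent),
        }
```

In this style of multi-task learning, training would typically minimize a weighted sum of the five task losses (cross-entropy for the labeling and classification heads, and, for example, a regression loss for the entity count); the weighting scheme here is again an assumption, since the abstract does not report one.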