Abstract

Named entity recognition (NER) is the foundation of a wide range of natural language processing (NLP) tasks in the domain of test identification. In this paper, we continue pre-training the BERT model on unlabeled texts related to test identification, injecting domain knowledge into the pre-trained model and achieving domain adaptation. Experimental results show that the proposed domain-adaptive pre-training method improves the F1 score by 1% over the baseline on the test identification NER task.
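
The continued pre-training described above follows the standard masked-language-model (MLM) objective. Below is a minimal sketch of how such domain-adaptive pre-training can be run with the Hugging Face Transformers library; the example sentences, the `bert-base-uncased` checkpoint, the output directory, and all hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: domain-adaptive pre-training of BERT via continued MLM training.
# Assumptions: toy in-domain sentences, bert-base-uncased, illustrative hyperparameters.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Unlabeled in-domain texts (hypothetical examples standing in for a real corpus).
domain_texts = [
    "The test identification procedure records each measurement channel.",
    "Sensor calibration is verified before the identification test begins.",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": domain_texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

# Randomly mask 15% of tokens -- the standard BERT MLM objective.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-domain-adapted",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
# The adapted checkpoint would then be fine-tuned on the labeled NER data.
trainer.save_model("bert-domain-adapted")
```

In this setup only the unlabeled corpus changes; the model architecture and MLM objective stay the same, which is what lets the domain knowledge be absorbed before the NER fine-tuning stage.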
