Abstract

Traditional neural-network-based named entity recognition models use static word vectors, which cannot represent the ambiguity a word takes on in different contexts. To address this, the ERNIE-BiLSTM-CRF model is proposed. First, the ERNIE pre-trained model uses multiple Transformer layers to output different vectors for the same word in different contexts, yielding dynamic word vectors that carry information from the whole sequence. Second, these word vectors are fed into a BiLSTM layer, whose forward and backward LSTMs capture sentence context and extract richer sentence features, improving the model's effectiveness. Finally, a CRF layer labels the sequence, producing the globally optimal tag sequence and completing the named entity recognition task. Experimental results show that, compared with traditional models, the F1 score of this model is significantly improved.
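
The abstract describes a three-stage pipeline: ERNIE produces dynamic token vectors, a BiLSTM enriches them with bidirectional context, and a CRF decodes the globally optimal tag sequence. Below is a minimal sketch of that pipeline, assuming the Hugging Face transformers library and the pytorch-crf package; the checkpoint name "nghuyong/ernie-1.0-base-zh" and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF


class ErnieBiLstmCrf(nn.Module):
    def __init__(self, num_tags: int, hidden_size: int = 256):
        super().__init__()
        # ERNIE encoder: multiple Transformer layers produce
        # context-dependent (dynamic) word vectors.
        self.encoder = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
        # BiLSTM: forward and backward LSTMs capture sentence context.
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        # Linear layer maps BiLSTM features to per-token tag scores.
        self.classifier = nn.Linear(2 * hidden_size, num_tags)
        # CRF: scores whole tag sequences rather than independent tokens.
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Dynamic word vectors from ERNIE for each token in context.
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        features, _ = self.bilstm(hidden)
        emissions = self.classifier(features)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tags under the CRF.
            return -self.crf(emissions, tags, mask=mask)
        # Inference: Viterbi decoding of the globally optimal tag sequence.
        return self.crf.decode(emissions, mask=mask)
```

In this sketch the CRF layer is what distinguishes the model from a plain token classifier: instead of picking the best tag for each token independently, `crf.decode` runs Viterbi decoding over learned tag-transition scores, which is the "globally optimal labeling" the abstract refers to.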
