Abstract

As question answering (QA) systems continue to improve, combining them with knowledge graphs for specific professional domains can substantially improve their practicality. Most existing knowledge graphs are general-purpose graphs with broad coverage but poor quality: their data are loosely organized and professional knowledge is sparsely covered, so QA matching often misses domain knowledge and returns unsatisfactory answers. To address this drawback, this article proposes a named entity recognition method based on the BERT-BiLSTM-CRF model. The method combines BERT's context-aware embeddings, BiLSTM's sequence modeling ability, and CRF's modeling of label dependencies to improve the accuracy of identifying and annotating entities in text. Comparative experiments show that the BERT-BiLSTM-CRF model outperforms the BERT-BiLSTM and BERT-CRF models in accuracy. Combining a knowledge graph with the BERT-BiLSTM-CRF model in a question answering system can therefore greatly improve the accuracy of knowledge extraction, make question answering scenarios more natural, and reduce the probability of unanswered questions.
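To illustrate the CRF component's role in modeling label dependencies, below is a minimal, self-contained sketch of linear-chain Viterbi decoding over BIO tags. The emission scores here are hypothetical toy values standing in for BiLSTM outputs; the tag set, scores, and transition penalties are illustrative assumptions, not values from the paper.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of {tag: score} dicts, one per token
                 (in BERT-BiLSTM-CRF these come from the BiLSTM layer)
    transitions: {(prev_tag, tag): score} modeling label dependencies
    tags:        list of all tag names
    """
    # scores[t] = best score of any tag path ending in tag t so far
    scores = {t: emissions[0][t] for t in tags}
    backptr = []
    for em in emissions[1:]:
        new_scores, ptrs = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: scores[p] + transitions[(p, t)])
            new_scores[t] = scores[best_prev] + transitions[(best_prev, t)] + em[t]
            ptrs[t] = best_prev
        backptr.append(ptrs)
        scores = new_scores
    # Backtrack from the best final tag to recover the full path.
    best = max(tags, key=scores.get)
    path = [best]
    for ptrs in reversed(backptr):
        path.append(ptrs[path[-1]])
    return list(reversed(path))


tags = ["O", "B-ENT", "I-ENT"]
# Transition scores encode label dependencies, e.g. O -> I-ENT is invalid
# in BIO tagging, so it gets a large penalty (toy values).
transitions = {(p, t): 0.0 for p in tags for t in tags}
transitions[("O", "I-ENT")] = -10.0
transitions[("B-ENT", "I-ENT")] = 1.0

# Hypothetical per-token emission scores for a three-token sentence.
emissions = [
    {"O": 0.0, "B-ENT": 2.0, "I-ENT": 0.0},
    {"O": 0.0, "B-ENT": 0.0, "I-ENT": 1.0},
    {"O": 2.0, "B-ENT": 0.0, "I-ENT": 0.0},
]

print(viterbi_decode(emissions, transitions, tags))  # → ['B-ENT', 'I-ENT', 'O']
```

The transition matrix is what the CRF layer learns during training; by scoring whole tag sequences rather than individual tokens, it rules out invalid sequences such as an I- tag following O.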
