Abstract

Named entity recognition, which aims to identify and classify named entities mentioned in structured or unstructured text, is a fundamental subtask of information extraction in natural language processing (NLP). With the growing adoption of electronic medical records, extracting key and effective information from electronic documents through named entity recognition has become an increasingly popular research direction. In this article, we adapt BERT, a recently introduced pre-trained language model, for named entity recognition in electronic medical records to address the problem of missing context information, and we add an extra mechanism to capture relationships between words. Based on this, (1) entities can be represented by sentence-level vectors that encode both the forward and backward information of the sentence and can be used directly by downstream tasks; (2) the model acquires contextual word representations and learns potential relations between words, reducing the influence of inconsistent entity annotation within a text. We conduct experiments on an electronic medical record dataset proposed at the China Conference on Knowledge Graph and Semantic Computing (CCKS) in 2019. The experimental results show that our proposed method improves over traditional methods.
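
The abstract does not specify the implementation details, but the described architecture (a pre-trained BERT encoder producing bidirectional contextual representations, plus an extra mechanism for word-word relations, followed by per-token entity classification) can be sketched roughly as below. This is a minimal illustration, not the authors' code: the model name, the use of an additional self-attention layer as the relation mechanism, and the label count are all assumptions.

```python
# Minimal sketch (NOT the authors' implementation) of BERT-based NER with an
# extra self-attention layer over token representations to model pairwise
# word relations. Model name, relation mechanism, and label set are assumed.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertNerSketch(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Assumed "extra mechanism": one multi-head self-attention layer that
        # lets each token attend to all others, capturing word-word relations.
        self.relation_attn = nn.MultiheadAttention(
            hidden, num_heads=8, batch_first=True
        )
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq = out.last_hidden_state  # bidirectional contextual token vectors
        # key_padding_mask expects True at padding positions.
        attn_out, _ = self.relation_attn(
            seq, seq, seq, key_padding_mask=~attention_mask.bool()
        )
        return self.classifier(attn_out)  # per-token label logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertNerSketch(num_labels=7)  # e.g. BIO tags for 3 entity types + O (assumed)
# Example clinical sentence: "The patient has a history of hypertension."
enc = tokenizer("患者有高血压病史。", return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
predicted_tags = logits.argmax(dim=-1)  # one predicted tag id per token
```

In this sketch, the sentence-level representation mentioned in point (1) corresponds to BERT's contextual encoding of the whole sequence, while the added attention layer stands in for the relation-capturing mechanism of point (2).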
