Abstract
Clinical named entity recognition (CNER) is a fundamental step for many clinical Natural Language Processing (NLP) systems; it aims to recognize and classify clinical entities such as diseases, symptoms, exams, body parts, and treatments in clinical free text. In recent years, with the development of deep learning, deep neural networks (DNNs) have been widely applied to Chinese CNER and many other clinical NLP tasks. However, these state-of-the-art models fail to make full use of the global information and multi-level semantic features in clinical texts. To address this, we design an improved character-level representation that integrates character embeddings with character-label embeddings to enhance the specificity and diversity of feature representations. We then propose a multi-head self-attention based Bi-directional Long Short-Term Memory Conditional Random Field (MUSA-BiLSTM-CRF) model. By introducing multi-head self-attention and incorporating a medical dictionary, the model more effectively captures the weight relationships between characters as well as multi-level semantic feature information, which is expected to greatly improve the performance of Chinese CNER. We evaluate our model on two CCKS challenge benchmark datasets (CCKS2017 Task 2 and CCKS2018 Task 1), and the experimental results show that our model outperforms state-of-the-art DNN-based methods.
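To illustrate the core mechanism named in the abstract, the following is a minimal NumPy sketch of multi-head self-attention over a sequence of character representations. It is an assumption-laden toy, not the paper's implementation: the weight matrices are random, the dimensions (6 characters, 16-dimensional vectors, 4 heads) are arbitrary, and in the actual model the inputs would be BiLSTM hidden states rather than raw embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, num_heads, rng):
    """Toy multi-head self-attention over a character sequence.

    X: (seq_len, d_model) character representations. Projection
    weights are random here, purely for illustration.
    """
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        # Per-head query/key/value projections (hypothetical weights).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Scaled dot-product attention: each character attends to all others.
        scores = softmax(Q @ K.T / np.sqrt(d_k))  # (seq_len, seq_len)
        heads.append(scores @ V)
    # Concatenate head outputs back to d_model dimensions.
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
chars = rng.standard_normal((6, 16))  # 6 characters, 16-dim vectors
out = multi_head_self_attention(chars, num_heads=4, rng=rng)
print(out.shape)  # (6, 16)
```

Because every character attends to every other character, the output at each position mixes in sequence-wide context, which is the "global information" the abstract argues BiLSTM-CRF baselines under-use.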