Abstract

A significant part of the data in electronic health records (EHRs) is free text containing valuable information about clinical events. This information needs to be extracted to enable further analysis and use, both in daily healthcare settings and in research. Clinical named entity recognition is an important natural language processing (NLP) task, critical for extracting key concepts (named entities) from clinical narratives and encoding them. The aim of this paper was to compare the automatic entity recognition performance of Amazon Comprehend Medical (ACM), the Clinical Language Annotation, Modeling and Processing (CLAMP) toolkit, and Spark NLP on a standardized, validated dataset and its annotated entities. Recall, precision, and F1-score were used to evaluate the accuracy and performance of the tools. Spark NLP outperformed ACM and CLAMP in terms of average recall, whereas CLAMP showed better average precision. Depending on the use case, both CLAMP and Spark NLP are suitable for real-world applications.
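For context, the reported metrics follow their standard definitions for entity-level evaluation. The sketch below (not taken from the study; the annotation tuples and labels are hypothetical) illustrates how exact-match precision, recall, and F1-score are typically computed from gold and predicted entity sets.

# Illustrative sketch: entity-level precision, recall, and F1 for NER,
# using exact-match comparison of (start_offset, end_offset, label) tuples.
# The annotations below are hypothetical, not data from the paper.

def evaluate_ner(gold: set, predicted: set) -> dict:
    """Exact-match entity-level metrics, as commonly used in clinical NER."""
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

gold = {(0, 9, "PROBLEM"), (15, 27, "TREATMENT"), (40, 52, "TEST")}
predicted = {(0, 9, "PROBLEM"), (15, 27, "TREATMENT"), (60, 70, "TEST")}

print(evaluate_ner(gold, predicted))
# {'precision': 0.666..., 'recall': 0.666..., 'f1': 0.666...}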
