Abstract

The rapid increase in the adoption of electronic health records (EHRs) in health care institutions has motivated the use of entity extraction tools to extract meaningful information from clinical notes written in an unstructured, narrative style. This paper investigates the performance of two such tools in automatic entity extraction. Specifically, it focuses on the automatic medication extraction performance of Amazon Comprehend Medical (ACM) and the Clinical Language Annotation, Modeling and Processing (CLAMP) toolkit, using the 2014 i2b2 NLP challenge dataset and its annotated medical entities. Recall, precision, and F-score are used to evaluate the performance of the tools.

Clinical Relevance - The majority of data in EHRs is free text, which holds a gold mine of patient information, while computerized applications in healthcare institutions and clinical research rely on structured data. As a result, information hidden in clinical free text needs to be extracted and formatted as structured data. This paper evaluates the performance of ACM and CLAMP in automatic entity extraction. The evaluation results show that CLAMP achieves an F-score of 91%, compared to an 87% F-score for ACM.
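As a minimal sketch of the evaluation metrics named above, the snippet below computes precision, recall, and F-score from true-positive, false-positive, and false-negative entity counts. The counts used in the example are hypothetical and are not the paper's reported results.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from raw entity-match counts.

    tp: entities the tool extracted that match a gold annotation
    fp: entities the tool extracted with no matching gold annotation
    fn: gold annotations the tool missed
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical counts: 91 correct extractions, 9 spurious, 9 missed.
p, r, f1 = precision_recall_f1(tp=91, fp=9, fn=9)
print(round(p, 2), round(r, 2), round(f1, 2))  # prints: 0.91 0.91 0.91
```

F-score is the harmonic mean of precision and recall, so it penalizes a tool that trades one metric sharply for the other.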
