Abstract

Extracting clinical terms from free-text radiology reports is an important first step toward their secondary use. However, there is no general consensus on which kinds of terms should be extracted. In this paper, we propose an information model comprising three types of clinical entities: observations, clinical findings, and modifiers. To determine its applicability to in-house radiology reports, we extracted clinical terms with state-of-the-art deep learning models and compared the results. We trained and evaluated the models using 540 in-house chest computed tomography (CT) reports annotated by multiple medical experts. Two deep learning models were compared, and the effect of pre-training was explored. To investigate the generalizability of the model, we also evaluated it on chest CT reports from another institution. The micro F1-scores of our best-performing model on the in-house and external datasets were 95.36% and 94.62%, respectively. These results indicate that the entities defined in our information model are suitable for extracting clinical terms from radiology reports and that the model generalizes well to datasets from other institutions.
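The micro F1-score reported above is computed over extracted entities pooled across all documents. As a minimal sketch (the function name, entity-tuple format, and toy data below are hypothetical, not taken from the paper), span-level micro-averaged F1 with exact-match scoring could look like this:

```python
# Minimal sketch of span-level micro F1 for clinical entity extraction.
# Entities are (start, end, label) tuples; all names and data are illustrative.

def micro_f1(gold_docs, pred_docs):
    """Micro-averaged F1 over exact-match entity spans across all documents."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        gold_set, pred_set = set(gold), set(pred)
        tp += len(gold_set & pred_set)   # spans predicted correctly
        fp += len(pred_set - gold_set)   # spurious predictions
        fn += len(gold_set - pred_set)   # missed gold entities
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example using the three entity types of the proposed information model
gold = [[(0, 4, "OBSERVATION"), (10, 18, "CLINICAL_FINDING"), (20, 25, "MODIFIER")]]
pred = [[(0, 4, "OBSERVATION"), (10, 18, "CLINICAL_FINDING")]]
print(round(micro_f1(gold, pred), 2))  # 0.8
```

Micro-averaging counts every entity equally regardless of its type or document, which is why a single score summarizes performance across observations, clinical findings, and modifiers.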

