Abstract

Generating real-world evidence (RWE) on disease response from radiological reports is important for understanding cancer treatment effectiveness and developing personalized treatment. A lack of standardization in reporting among radiologists limits the feasibility of large-scale interpretation of disease response. This study examines the utility of natural language processing (NLP) for large-scale interpretation of disease response using a standardized oncologic response lexicon (OR-RADS) to facilitate RWE collection. Radiologists annotated 3503 retrospectively collected clinical impressions from radiological reports across several cancer types with one of seven OR-RADS categories. A Bidirectional Encoder Representations from Transformers (BERT) model was trained on this dataset with an 80/20 train/test split to perform multiclass and single-class classification tasks using the OR-RADS. Radiologists also performed the classification tasks to allow comparison of human and model performance. The model achieved accuracies of 95% to 99% across all classification tasks, performing better on single-class tasks than on the multiclass task and producing minimal misclassifications, most of which involved overpredicting the equivocal and mixed OR-RADS labels. Human accuracy ranged from 74% to 93% across all classification tasks, again with better performance on single-class tasks. This study demonstrates the feasibility of a BERT NLP model for predicting disease response in cancer patients, exceeding human performance, and supports the use of the standardized OR-RADS lexicon to improve large-scale prediction accuracy.
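The data setup described above can be sketched in a few lines. This is a minimal illustration only: the seven category names below are hypothetical placeholders (the actual labels are defined by the OR-RADS lexicon), the toy dataset stands in for the 3503 annotated impressions, and the resulting integer-labeled split is what would feed a BERT fine-tuning pipeline for the multiclass task. A helper also shows how a multiclass label set can be collapsed into a binary labeling for one of the single-class tasks.

```python
import random

# Hypothetical OR-RADS category names; the real lexicon defines its own
# seven labels, which are not reproduced here.
OR_RADS_LABELS = [
    "complete_response", "partial_response", "stable_disease",
    "progressive_disease", "mixed", "equivocal", "no_disease",
]
LABEL_TO_ID = {name: i for i, name in enumerate(OR_RADS_LABELS)}

def train_test_split_80_20(examples, seed=42):
    """Shuffle (text, label) pairs and split them 80%/20% for train/test."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

def to_single_class(examples, target_label):
    """Collapse the multiclass labels to binary for a single-class task:
    1 if the impression carries target_label, else 0."""
    return [(text, 1 if label == target_label else 0)
            for text, label in examples]

# Toy stand-in for the 3503 annotated clinical impressions.
dataset = [(f"impression {i}", OR_RADS_LABELS[i % 7]) for i in range(3503)]

train, test = train_test_split_80_20(dataset)
# Integer-encoded labels, as a sequence-classification head expects.
train_ids = [(text, LABEL_TO_ID[label]) for text, label in train]
# Binary relabeling for one hypothetical single-class task.
train_binary = to_single_class(train, "progressive_disease")
```

With an 80/20 split of 3503 examples, the training set holds 2802 impressions and the test set 701; in practice the integer-labeled texts would then be tokenized and passed to a BERT sequence-classification model.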
