Abstract

Current state-of-the-art natural language processing (NLP) techniques use transformer deep learning architectures, which depend on large training datasets. We hypothesized that traditional NLP techniques may outperform transformers for smaller radiology report datasets. We compared the performance of BioBERT, a deep learning-based transformer model pre-trained on biomedical text, and three traditional machine learning models (gradient boosted tree, random forest, and logistic regression) on seven classification tasks given free-text radiology reports. Tasks included detection of appendicitis, diverticulitis, bowel obstruction, and enteritis/colitis on abdomen/pelvis CT reports, ischemic infarct on brain CT/MRI reports, and medial and lateral meniscus tears on knee MRI reports (7,204 total annotated reports). The performance of NLP models on held-out test sets was compared after training on the full training set and on 2.5%, 10%, 25%, 50%, and 75% random subsets of the training data. In all tested classification tasks, BioBERT performed poorly at smaller training sample sizes compared with non-deep learning NLP models. Specifically, BioBERT required training on approximately 1,000 reports to perform similarly to or better than non-deep learning models. At around 1,250 to 1,500 training samples, the test performance of all models began to plateau, with additional training data yielding minimal performance gain. With larger sample sizes, transformer NLP models achieved superior performance in radiology report binary classification tasks. However, with smaller (<1,000) and more imbalanced training datasets, traditional NLP techniques performed better. Our benchmarks can help guide clinical NLP researchers in selecting machine learning models according to their dataset characteristics.
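The experimental design described above (training each traditional model on increasing random fractions of the annotated reports and evaluating on a fixed held-out test set) can be illustrated with a minimal sketch. The code below is not the authors' pipeline; the report texts, labels, feature representation (TF-IDF), and metric (AUC) are illustrative assumptions, and only the non-deep learning baselines are shown.

```python
# Illustrative sketch (not the study's code): traditional NLP baselines trained on
# increasing fractions of a labeled radiology-report dataset, evaluated on a fixed
# held-out test set. Report texts and labels below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

reports = ["ct abdomen pelvis: acute appendicitis with periappendiceal fat stranding",
           "ct abdomen pelvis: no acute inflammatory process identified"] * 500
labels = np.array([1, 0] * 500)

# Fixed held-out test set; training subsets are drawn from the remaining data.
X_train, X_test, y_train, y_test = train_test_split(
    reports, labels, test_size=0.2, stratify=labels, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosted_tree": GradientBoostingClassifier(random_state=0),
}

# Fractions mirror the subsets reported in the abstract (2.5% to 100%).
for frac in (0.025, 0.10, 0.25, 0.50, 0.75, 1.0):
    n = max(2, int(frac * len(X_train)))
    idx = np.random.RandomState(0).choice(len(X_train), size=n, replace=False)
    X_sub = [X_train[i] for i in idx]
    y_sub = y_train[idx]
    for name, clf in models.items():
        pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
        pipe.fit(X_sub, y_sub)
        auc = roc_auc_score(y_test, pipe.predict_proba(X_test)[:, 1])
        print(f"{name:22s} frac={frac:5.3f} n={n:5d} AUC={auc:.3f}")
```

In this setup the test set stays constant across subset sizes, so any change in AUC reflects the amount of training data rather than a shifting evaluation target.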
