Abstract

Manual vetting of radiology referrals is an essential daily task to ensure the appropriateness of received referrals. It requires substantial clinical experience and can burden radiology staff. With the emergence of artificial intelligence (AI) and advances in natural language processing (NLP), most available machine learning-based NLP models have targeted research cohort building and healthcare quality. Other healthcare management tasks, such as auto-vetting radiology referrals, have not been adequately addressed. Furthermore, challenges including class imbalance and the lack of direct comparison with humans have yet to be investigated sufficiently. In this study, a set of machine learning and deep learning models was developed to auto-vet lumbar spine magnetic resonance imaging (LSMRI) referrals as indicated or not indicated for scanning, using referrals from two hospitals. The impact of a text augmentation technique on the models' performance was investigated, and the performance of four feature extraction techniques was critically analyzed. In addition, the developed models were compared, on an unseen dataset, with two expert radiologists who were not involved in establishing the gold-standard labels. The results show that the models' performance improved significantly with the augmented data, with F1 scores increasing by 1% to 8%. A support vector machine with bag-of-words features achieved the highest AUC, reaching 0.99; a convolutional neural network achieved the second-highest AUC of 0.97. All models outperformed the two expert radiologists on the unseen dataset.
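The best-performing setup reported above (a support vector machine over bag-of-words features) can be sketched as a standard scikit-learn text-classification pipeline. This is a minimal illustration only: the referral texts, labels, and query below are invented toy examples, not study data, and the paper's actual preprocessing, augmentation, and hyperparameters are not specified here.

```python
# Hypothetical sketch: bag-of-words features feeding an SVM classifier,
# as in the study's best-performing model. All data below is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy referral texts; 1 = indicated for LSMRI, 0 = not indicated
referrals = [
    "low back pain radiating to left leg, suspected disc prolapse",
    "chronic lumbar pain with neurological deficit, query cauda equina",
    "mild back ache after gardening, no red flags",
    "routine follow-up, no new symptoms reported",
]
labels = [1, 1, 0, 0]

# CountVectorizer builds the bag-of-words representation; SVC is the SVM
model = make_pipeline(CountVectorizer(), SVC())
model.fit(referrals, labels)

# Vet a new, unseen referral
pred = model.predict(["back pain with leg numbness, possible disc herniation"])[0]
print("indicated" if pred == 1 else "not indicated")
```

In practice the study evaluated such models with AUC and F1 on a held-out dataset; with real referral data one would also address the class imbalance the abstract highlights (e.g. via the text augmentation the authors investigate).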
