Abstract

Objectives
To evaluate the performance of a deep convolutional neural network (DCNN) in detecting and classifying distal radius fractures, metal, and casts on radiographs using labels based on radiology reports. The secondary aim was to evaluate the effect of training set size on the algorithm's performance.

Methods
A total of 15,775 frontal and lateral radiographs, the corresponding radiology reports, and a ResNet18 DCNN were used. Fracture detection and classification models were developed per view and merged. Incrementally sized subsets served to evaluate the effect of training set size. Two musculoskeletal radiologists set the standard of reference on radiographs (test set A). A subset (B) was rated by three radiology residents. For a per-study comparison with the radiology residents, the results of the best models were merged. Statistics used were ROC and AUC, Youden's J statistic (J), and Spearman's correlation coefficient (ρ).

Results
The models' AUC/J on (A) for metal and cast were 0.99/0.98 and 1.0/1.0, respectively. The models' and residents' AUC/J on (B) were similar for fracture (0.98/0.91 vs. 0.98/0.92) and multiple fragments (0.85/0.58 vs. 0.91/0.70). Training set size and AUC correlated for metal (ρ = 0.740), cast (ρ = 0.722), fracture (frontal ρ = 0.947, lateral ρ = 0.946), multiple fragments (frontal ρ = 0.856), and fragment displacement (frontal ρ = 0.595).

Conclusions
Models trained on a DCNN with report-based labels to detect distal radius fractures on radiographs are suitable as a secondary reading aid; models for fracture classification are not ready for clinical use. Bigger training sets lead to better models in all categories except joint affection.

Key Points
• Detection of metal and cast on radiographs is excellent using AI and labels extracted from radiology reports.
• Automatic detection of distal radius fractures on radiographs is feasible, and its performance approximates that of radiology residents.
• Automatic classification of the type of distal radius fracture varies in accuracy and is inferior for joint involvement and fragment displacement.
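The abstract reports its results as AUC, Youden's J statistic, and Spearman's ρ between training set size and AUC. As a minimal sketch of how these three metrics relate, the following computes each with scikit-learn and SciPy; all arrays are synthetic illustrations, not the study's data.

```python
# Hedged sketch: the three statistics reported in the abstract
# (AUC, Youden's J, Spearman's rho). All data here are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic per-study ground truth (fracture yes/no) and model scores.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0.0, 1.0)

# Area under the ROC curve.
auc = roc_auc_score(y_true, y_score)

# Youden's J = max over thresholds of (sensitivity + specificity - 1),
# i.e. the maximum of (TPR - FPR) along the ROC curve.
fpr, tpr, _ = roc_curve(y_true, y_score)
j = np.max(tpr - fpr)

# Spearman's rho between training-set size and the AUC of models trained
# on incrementally sized subsets (illustrative values only).
train_sizes = np.array([500, 1000, 2000, 4000, 8000, 15775])
subset_aucs = np.array([0.81, 0.85, 0.90, 0.93, 0.96, 0.98])
rho, _ = spearmanr(train_sizes, subset_aucs)

print(round(auc, 3), round(j, 3), round(rho, 3))
```

A strictly monotone relationship between subset size and AUC, as in the illustrative values above, yields ρ = 1.0; the weaker correlations reported for cast and fragment displacement reflect non-monotone gains with larger training sets.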

Highlights

  • Acute distal radius fractures are common traumatic injuries and comprise 17% of all fractures in western societies [1]

  • Deep convolutional neural networks (DCNNs) are a category of deep learning (DL) defined by their underlying architecture

  • This study evaluated the potential of a ResNet18 DCNN to develop models that detect cast, metal, and distal radius fractures on wrist radiographs and classify fractures, utilizing labels based on radiology reports



Introduction

Acute distal radius fractures are common traumatic injuries and comprise 17% of all fractures in western societies [1]. Distal radius fractures can be diagnosed confidently on wrist radiographs [2]. DCNNs are well suited for pattern detection on images. They have successfully been used for fracture detection and localization on radiographs [3,4,5,6,7,8,9,10,11,12]. Cheng et al [8] used registry data to label hip fractures on radiographs, and only Olczak et al [12] used key phrases of radiology reports to label radiographs for the training set. Five studies have evaluated the automated detection of distal radius fractures on radiographs with promising sensitivities and specificities of 81–98% and 73–100%, respectively [4,5,6, 12, 13]. The ideal number of radiographs to train and test an algorithm for peripheral fracture detection is unclear, and studies have utilized varying numbers ranging from 524 to 65,264 radiographs [12, 13].

