Abstract

Background: Hand X-rays are ordered in outpatient, inpatient, and emergency settings, and the results are often initially interpreted by health care providers without radiology training. There may be utility in automating upper extremity X-ray analysis to aid rapid initial interpretation. Deep neural networks have been effective in several medical imaging analysis applications. The purpose of this work was to apply a deep learning framework to automatically classify the radiographic positioning of hand X-rays.

Methods: A 152-layer deep neural network was trained using the musculoskeletal radiographs data set, which contains 6003 hand X-rays. The data set was filtered to remove pediatric X-rays and atypical views, and each X-ray was labeled as a posteroanterior (PA), lateral, or oblique view. A subset of images was set aside for model validation and testing. Data set augmentation was performed, including horizontal and vertical flips, rotations, and modifications of image brightness and contrast. The model was evaluated, and performance was reported as a confusion matrix from which accuracy, precision, sensitivity, and specificity were calculated.

Results: The augmented training data set consisted of 80 672 images, distributed as 38% PA, 35% lateral, and 27% oblique projections. When evaluated on the test data set, the model achieved 96.0% overall accuracy, 93.6% precision, 93.6% sensitivity, and 97.1% specificity.

Conclusions: Radiographic positioning of hand X-rays can be effectively classified by a deep neural network. Further work will address localization of abnormalities, automated assessment of standard radiographic measures, and eventually computer-aided diagnosis and management guidance for skeletal pathology.
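
The abstract does not include implementation details, so the following is a minimal sketch rather than the authors' code. It assumes a PyTorch/torchvision pipeline, interprets the 152-layer deep neural network as ResNet-152, and uses illustrative values for the input size, rotation range, brightness/contrast factors, and data folder layout, none of which are specified in the abstract.

    # Sketch only (not the authors' code): augmentation and 3-class view
    # classification under the assumptions stated above.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Augmentations named in the abstract: horizontal and vertical flips,
    # rotations, and brightness/contrast modification. Parameter values are
    # illustrative assumptions.
    train_transforms = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # X-rays are single channel
        transforms.Resize((224, 224)),                # input size is an assumption
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomVerticalFlip(p=0.5),
        transforms.RandomRotation(degrees=30),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # Hypothetical folder layout: one subfolder per view (PA, lateral, oblique).
    train_data = datasets.ImageFolder("hand_xrays/train", transform=train_transforms)
    train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

    # 152-layer network with a 3-way head for PA / lateral / oblique;
    # ImageNet pretraining is an assumption.
    model = models.resnet152(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 3)

The abstract reports overall accuracy, precision, sensitivity, and specificity derived from a confusion matrix but does not state how the class-wise values were aggregated; the helper below assumes macro-averaging over the three view classes.

    import numpy as np

    def metrics_from_confusion(cm: np.ndarray) -> dict:
        """Macro-averaged metrics from a square confusion matrix
        (rows = true class, columns = predicted class)."""
        total = cm.sum()
        accuracy = np.trace(cm) / total
        precision, sensitivity, specificity = [], [], []
        for k in range(cm.shape[0]):
            tp = cm[k, k]
            fp = cm[:, k].sum() - tp
            fn = cm[k, :].sum() - tp
            tn = total - tp - fp - fn
            precision.append(tp / (tp + fp))
            sensitivity.append(tp / (tp + fn))
            specificity.append(tn / (tn + fp))
        return {
            "accuracy": float(accuracy),
            "precision": float(np.mean(precision)),
            "sensitivity": float(np.mean(sensitivity)),
            "specificity": float(np.mean(specificity)),
        }

    # Usage with a hypothetical 3x3 confusion matrix (PA, lateral, oblique):
    # metrics_from_confusion(np.array([[95, 2, 3], [1, 90, 9], [4, 6, 90]]))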
