Abstract

The aim of this study was to develop a deep learning model to distinguish rheumatoid arthritis (RA) from osteoarthritis (OA) using hand radiographs and to evaluate the effects of changing pretraining and training parameters on model performance. A convolutional neural network was retrospectively trained on 9714 hand radiograph exams from 8387 patients obtained from 2017 to 2021 at seven hospitals within an integrated healthcare network. Performance was assessed using an independent test set of 250 exams from 146 patients. Binary discriminatory capacity (no arthritis versus arthritis; RA versus not RA) and three-way classification (no arthritis versus OA versus RA) were evaluated. The effects of additional pretraining using musculoskeletal radiographs, using all views as opposed to only the posteroanterior view, and varying image resolution on model performance were also investigated. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa coefficient were used to evaluate diagnostic performance. For no arthritis versus arthritis, the model achieved an AUC of 0.975 (95% CI: 0.957, 0.989). For RA versus not RA, the model achieved an AUC of 0.955 (95% CI: 0.919, 0.983). For three-way classification, the model achieved a kappa of 0.806 (95% CI: 0.742, 0.866) and accuracy of 87.2% (95% CI: 83.2%, 91.2%) on the test set. Increasing image resolution increased performance up to 1024 × 1024 pixels. Additional pretraining on musculoskeletal radiographs and using all views did not significantly affect performance. A deep learning model can be used to distinguish no arthritis, OA, and RA on hand radiographs with high performance.
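As a rough illustration of the setup the abstract describes, the sketch below assembles a three-class convolutional classifier (no arthritis, OA, RA) operating on 1024 × 1024 hand radiographs and computes the reported evaluation metrics: one-versus-rest AUCs for the two binary questions, plus Cohen's kappa and accuracy for the three-way task. The backbone (ResNet-50), preprocessing, and hyperparameters are illustrative assumptions and are not taken from the paper, which specifies only that a CNN was trained and that 1024 × 1024 was the best-performing resolution.

```python
# Minimal sketch (not the authors' code) of a three-way hand-radiograph classifier
# and the evaluation metrics reported in the abstract.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.metrics import roc_auc_score, cohen_kappa_score, accuracy_score

NUM_CLASSES = 3  # 0 = no arthritis, 1 = osteoarthritis (OA), 2 = rheumatoid arthritis (RA)

# Radiographs are grayscale; replicate to 3 channels for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),              # best-performing resolution per the abstract
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained backbone (assumed); the abstract found that additional pretraining
# on musculoskeletal radiographs did not significantly change performance.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameter


def report_metrics(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """Compute the abstract's metrics from integer labels and softmax probabilities.

    y_true: shape (n_exams,), values in {0, 1, 2}
    y_prob: shape (n_exams, 3), rows sum to 1
    """
    y_pred = y_prob.argmax(axis=1)
    # Binary question 1: any arthritis (OA or RA) vs. no arthritis.
    auc_arthritis = roc_auc_score((y_true != 0).astype(int), 1.0 - y_prob[:, 0])
    # Binary question 2: RA vs. not RA.
    auc_ra = roc_auc_score((y_true == 2).astype(int), y_prob[:, 2])
    return {
        "AUC arthritis vs. no arthritis": auc_arthritis,
        "AUC RA vs. not RA": auc_ra,
        "kappa (three-way)": cohen_kappa_score(y_true, y_pred),
        "accuracy (three-way)": accuracy_score(y_true, y_pred),
    }
```

The one-versus-rest probabilities mirror the two binary comparisons reported in the abstract; confidence intervals such as those quoted would typically be obtained by bootstrapping the test set, a detail the abstract does not specify.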
