Abstract

This study aimed to assess the performance of a deep-learning model for grading knee osteoarthritis with the Kellgren-Lawrence system on real-life knee radiographs. A deep convolutional neural network was trained on 8964 knee radiographs from the Osteoarthritis Initiative (OAI), including a testing set of 962 images. Another 246 knee radiographs from the Far Eastern Memorial Hospital were used for external validation. The OAI testing set and external validation images were evaluated by experienced specialists: two orthopedic surgeons and a musculoskeletal radiologist. Accuracy, interobserver agreement, F1 score, precision, recall, specificity, and the ability to identify surgical candidates were used to compare the performance of the model with that of the specialists. Attention maps illustrated the interpretability of the model's classifications. The model achieved 78% accuracy and consistent interobserver agreement on both the OAI images (model-surgeon 1 κ = 0.80, model-surgeon 2 κ = 0.84, model-radiologist κ = 0.86) and the external validation images (model-surgeon 1 κ = 0.81, model-surgeon 2 κ = 0.82, model-radiologist κ = 0.83). Lower interobserver agreement was found for the images misclassified by the model (model-surgeon 1 κ = 0.57, model-surgeon 2 κ = 0.47, model-radiologist κ = 0.65). The model outperformed the specialists in identifying surgical candidates (Kellgren-Lawrence Grades 3 and 4), with an F1 score of 0.923. Our model not only produced results comparable to those of specialists in identifying surgical candidates but also performed consistently on open-database and real-life radiographs. We believe the images misclassified by the model were inherently ambiguous, as reflected by the significantly lower interobserver agreement among specialists on those cases.
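The metrics reported above can be reproduced in a straightforward way once model predictions and specialist gradings are available. The sketch below is not the authors' code; it is a minimal illustration, assuming a scikit-learn workflow, hypothetical grade arrays, and quadratic kappa weighting (the abstract does not state which weighting was used).

```python
"""Illustrative sketch: computing the reported metrics for Kellgren-Lawrence (KL) grading.
All variable names and values are hypothetical."""
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             f1_score, precision_score, recall_score)

# Hypothetical KL grades (0-4) for a handful of radiographs.
y_true = np.array([0, 1, 2, 3, 4, 2, 3, 1, 0, 4])   # specialist grading
y_pred = np.array([0, 1, 2, 3, 4, 2, 2, 1, 0, 3])   # model predictions

# Multi-class metrics over the five KL grades.
print("accuracy:", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("weighted precision:", precision_score(y_true, y_pred, average="weighted"))
print("weighted recall:", recall_score(y_true, y_pred, average="weighted"))

# Model-observer agreement; quadratic weights are a common choice for
# ordinal grades, though the paper may have used a different weighting.
print("kappa:", cohen_kappa_score(y_true, y_pred, weights="quadratic"))

# Surgical-candidate detection: collapse KL 3-4 vs. KL 0-2 and score the
# resulting binary task (the abstract reports F1 = 0.923 for this).
surgical_true = (y_true >= 3).astype(int)
surgical_pred = (y_pred >= 3).astype(int)
print("surgical-candidate F1:", f1_score(surgical_true, surgical_pred))
```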
