Abstract

Rotator cuff tears (RCTs) are among the most common shoulder injuries and are typically diagnosed with relatively expensive and time-consuming imaging tests such as magnetic resonance imaging or computed tomography. Deep learning algorithms are increasingly used to analyze medical images, but they have not been applied to identifying RCTs in ultrasound images. The aim of this study is to develop an approach that automatically classifies RCTs and visualizes tear location using ultrasound images and convolutional neural networks (CNNs). The proposed method was developed using transfer learning and fine-tuning with five pre-trained deep models (VGG19, InceptionV3, Xception, ResNet50, and DenseNet121), and Bayesian optimization was used to tune the hyperparameters of the CNN models. A total of 194 ultrasound images from Kosin University Gospel Hospital were used to train and test the models with five-fold cross-validation. Among the five models, DenseNet121 achieved the best classification performance, with 88.2% accuracy, 93.8% sensitivity, 83.6% specificity, and an AUC of 0.832. Gradient-weighted class activation mapping (Grad-CAM) highlighted the regions of the ultrasound images that were most influential in the models' predictions. The proposed approach demonstrates the feasibility of using deep learning with ultrasound images to assist in the diagnosis of RCTs.
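The pipeline summarized above can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation: it assumes a Keras/TensorFlow setup with an ImageNet-pretrained DenseNet121 backbone, a 224x224 input size, and placeholder values for the dropout rate and learning rates (the paper tunes such hyperparameters with Bayesian optimization).

# Minimal sketch of transfer learning + fine-tuning with DenseNet121 for
# binary RCT classification (tear vs. no tear). Hyperparameter values are
# placeholders; the paper selects them via Bayesian optimization.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

IMG_SHAPE = (224, 224, 3)  # assumed input size

# ImageNet-pretrained backbone without its classification head
base = DenseNet121(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)

# New binary classification head on top of the backbone's feature maps
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)                          # placeholder dropout rate
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(base.input, outputs)

# Stage 1: transfer learning -- freeze the backbone, train only the new head
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # one fold of 5-fold CV

# Stage 2: fine-tuning -- unfreeze the backbone and continue training
# end-to-end at a lower learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=...)

A Grad-CAM heat map for a trained model of this form could then be computed along the following lines; the choice of "conv5_block16_concat" as the last convolutional layer of Keras's DenseNet121 is an assumption of this sketch.

def grad_cam(model, image, conv_layer_name="conv5_block16_concat"):
    """Grad-CAM heat map for one preprocessed H x W x 3 image."""
    grad_model = models.Model(model.inputs,
                              [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, pred = grad_model(image[None, ...])   # add batch dimension
        score = pred[:, 0]                               # predicted "tear" probability
    grads = tape.gradient(score, conv_maps)              # gradients w.r.t. feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))         # per-channel importance
    cam = tf.nn.relu(tf.reduce_sum(conv_maps * weights[:, None, None, :], axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)              # normalize to [0, 1]
    return cam.numpy()[0]                                # low-resolution heat map

In practice the heat map is upsampled to the input resolution and overlaid on the ultrasound image, which is how the visualization of tear location described in the abstract is typically produced.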
