Abstract

The ability to efficiently and reproducibly generate subject-specific 3D models of bone and soft tissue is important to many areas of musculoskeletal research. However, methodologies requiring such models have largely been limited by lengthy manual segmentation times. Recently, machine learning, and more specifically convolutional neural networks, have shown potential to alleviate this bottleneck in research throughput. Thus, the purpose of this work was to develop a modified version of the convolutional neural network architecture U-Net to automate segmentation of the tibia and femur from double-echo steady-state knee magnetic resonance (MR) images. Our model was trained on a dataset of over 4,000 MR images from 34 subjects, segmented by three experienced researchers and reviewed by a musculoskeletal radiologist. For our validation and testing sets, we achieved Dice coefficients of 0.985 and 0.984, respectively. As further testing, we applied our trained model to a prior study of tibial cartilage strain and recovery. In this analysis, across all subjects, there were no statistically significant differences in cartilage strain between the machine learning and ground truth bone models, with a mean difference of 0.2 ± 0.7% (mean ± 95% confidence interval). This difference is within the measurement resolution of previous cartilage strain studies from our lab that used manual segmentation. In summary, we successfully trained, validated, and tested a machine learning model capable of segmenting MR images of the knee, achieving results comparable to those of trained human segmenters.
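
The abstract does not define the evaluation metric explicitly, but the Dice coefficients reported above are conventionally computed as twice the overlap between the predicted and ground-truth masks divided by their combined size. The sketch below illustrates this calculation for binary segmentation masks; it is an illustrative NumPy implementation, not the authors' code, and the function name and epsilon smoothing are assumptions.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Both arrays are expected to hold 0/1 (or boolean) labels of the same
    shape, e.g. a predicted bone mask and a manually segmented mask.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    # Dice = 2 * |A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))
```

A Dice coefficient of 1.0 indicates perfect voxel-wise agreement with the manual segmentation, so values of 0.984 to 0.985 correspond to near-complete overlap between the automated and ground-truth bone masks.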
