Abstract

Radiological degenerative phenotypes provide insight into a patient's overall extent of disease and can be predictive of future pathological developments as well as surgical outcomes and complications. The objective of this study was to develop a reliable method for automatically classifying sagittal MRI image stacks of cervical spinal segments with respect to these degenerative phenotypes. We manually evaluated sagittal image data of the cervical spine of 873 patients (5182 motion segments) with respect to 5 radiological phenotypes. We then used this data set as ground truth for training a range of multi-class, multi-label deep learning models to classify each motion segment automatically, and performed hyper-parameter optimization on these models. The ground truth evaluations turned out to be relatively balanced for the labels disc displacement posterior, osteophyte anterior superior, osteophyte posterior superior, and osteophyte posterior inferior. Although no single model performed equally well across all the labels, the 3D-convolutional approach proved preferable for classifying every label. Class imbalance in the training data and label noise made it difficult to achieve high predictive power for underrepresented classes; this shortcoming will be mitigated in future versions by extending the training data set accordingly. Nevertheless, the classification performance rivals, and in some cases surpasses, that of human raters, while reducing the evaluation time to only a few seconds.
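The multi-label setup described above (each motion segment can carry several phenotype labels at once, evaluated per label) can be illustrated with a minimal sketch. This is not the authors' implementation: the thresholding, the per-label F1 metric, and the placeholder name for the fifth phenotype (only four are named in the abstract) are assumptions for illustration.

```python
import numpy as np

# Label set from the abstract; the fifth phenotype is not named there,
# so "phenotype_5" is a hypothetical placeholder.
LABELS = [
    "disc_displacement_posterior",
    "osteophyte_anterior_superior",
    "osteophyte_posterior_superior",
    "osteophyte_posterior_inferior",
    "phenotype_5",
]

def predict_multilabel(logits, threshold=0.5):
    """Multi-label decision: independent sigmoid per label, then a
    per-label threshold, so one segment may receive several labels."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)

def per_label_f1(y_true, y_pred):
    """Per-label F1 score; evaluating each label separately is what
    exposes weak performance on underrepresented classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    scores = []
    for j in range(y_true.shape[1]):
        tp = np.sum((y_true[:, j] == 1) & (y_pred[:, j] == 1))
        fp = np.sum((y_true[:, j] == 0) & (y_pred[:, j] == 1))
        fn = np.sum((y_true[:, j] == 1) & (y_pred[:, j] == 0))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores
```

A 3D-convolutional backbone would produce the per-segment logit vector consumed here; reporting F1 per label rather than a single averaged score is one common way to make the effect of class imbalance visible.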
