Abstract

Scoliosis diagnosis and assessment rely on Cobb angle estimation from X-ray images of the spine. Recently, automated scoliosis assessment has been greatly improved by deep learning methods. However, in such methods, the Cobb angle is usually predicted by regression models that do not account for the structure of the spine. Alternatively, the Cobb angle can be estimated indirectly through landmark detection and vertebra segmentation, but this approach remains highly sensitive to small detection and segmentation errors. This paper proposes a novel deep learning architecture, called the vertebra localization and tilt estimation network (VLTENet). This network boosts Cobb angle estimation accuracy by employing vertebra localization and tilt estimation as its prediction targets. In particular, the VLTENet model innovatively combines a deep high-resolution network (HRNet) with a fully convolutional U-Net architecture to capture long-range contextual information, the overall spine structure, and local details in spinal X-ray images. A feature fusion channel attention (FFCA) module is also proposed to selectively emphasize more informative features and suppress less informative ones. In addition, a joint spine loss function (JS-Loss) is designed to account for the spine shape and other spatial constraints, so that the network focuses on spine-related regions and ignores irrelevant background. Finally, we propose a new Cobb angle estimation method that conforms to the clinical Cobb angle calculation guidelines and produces accurate estimates for different types of scoliosis. Extensive experiments on the publicly available AASCE challenge dataset and on an in-house dataset demonstrate the superiority of our method for the task of automatic scoliosis assessment.
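
The abstract does not detail how Cobb angles are derived from the predicted vertebra tilts. As a rough illustration only, the sketch below follows the clinical definition (the angle between the superior endplate of the upper end-vertebra and the inferior endplate of the lower end-vertebra, i.e. the largest tilt difference within a curve) and the common AASCE-style three-curve convention (proximal thoracic, main, distal). The function name, the segment boundaries, and the use of per-vertebra tilt angles as input are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def cobb_angles_from_tilts(tilts_deg):
    """Estimate three Cobb angles (proximal, main, distal) from per-vertebra
    tilt angles in degrees, ordered top (T1) to bottom (L5).

    Illustrative simplification: a Cobb angle is taken as the largest tilt
    difference between two vertebrae within a curve, matching the clinical
    end-vertebra definition. Segment conventions are assumptions.
    """
    tilts = np.asarray(tilts_deg, dtype=float)
    n = len(tilts)

    # Main curve: the pair of vertebrae with the largest tilt difference.
    diff = tilts[:, None] - tilts[None, :]            # diff[i, j] = tilt_i - tilt_j
    upper, lower = np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
    if upper > lower:
        upper, lower = lower, upper
    main = abs(tilts[upper] - tilts[lower])

    def max_pair(lo, hi):
        # Largest tilt difference within the segment [lo, hi).
        seg = tilts[lo:hi]
        if len(seg) < 2:
            return 0.0
        return float(np.max(np.abs(seg[:, None] - seg[None, :])))

    # Secondary curves: repeat the search above and below the main curve.
    proximal = max_pair(0, upper + 1)
    distal = max_pair(lower, n)
    return proximal, main, distal

# Example: 17 tilt angles (degrees), top to bottom.
print(cobb_angles_from_tilts(
    [2, 3, 5, 8, 10, 12, 9, 4, -2, -8, -14, -16, -12, -7, -3, 0, 1]))
# -> (10.0, 28.0, 17.0)
```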
