Abstract

To ensure the safety of spinal surgery, a sufficient number of labeled Magnetic Resonance Imaging (MRI) images is essential for training an accurate vertebral body segmentation model, yet the number of labeled MRI images held by an individual medical institution, such as a hospital, is generally limited. Moreover, owing to patient privacy, annotated images are difficult to share directly as training data for vertebral body segmentation models. To address these challenges, a Federated Learning-based Vertebral Body Segmentation Framework (FLVBSF) is proposed in this work, comprising a novel local attention mechanism based on Dual Attention Gates (DAGs) and a global federated learning framework. The DAGs improve the model's sensitivity to vertebral body pixels and its segmentation accuracy, while the global federated learning framework boosts segmentation performance by collaboratively exploiting labeled spine images from different institutions. Under centralized training, the U-Net with DAGs achieves a pixel-level accuracy of 98.29%, a Dice similarity coefficient of 88.04%, a sensitivity of 88.25%, a specificity of 99.16%, and a Jaccard similarity coefficient of 79.09%, with a mean segmentation time of 0.14 s per case. Meanwhile, the federated learning experiments show that the proposed FLVBSF improves the vertebral body segmentation model by a statistically significant margin.
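To make the two ingredients of the abstract concrete, the sketch below shows (a) a standard additive attention gate of the kind used in attention-gated U-Nets, which re-weights skip-connection features so foreground (vertebral body) pixels are emphasised, and (b) a FedAvg-style aggregation round in which a server averages client model weights in proportion to each institution's data size. This is a minimal illustration under those assumptions, not the paper's actual DAG design or federated protocol; the class, function, and parameter names (AttentionGate, fed_avg, client_sizes) are hypothetical.

```python
# Minimal sketch: an additive attention gate plus one FedAvg aggregation round.
# Assumed, illustrative names; not the FLVBSF reference implementation.
import copy
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate: re-weights skip-connection features x with a
    coarser gating signal g, suppressing background responses."""

    def __init__(self, in_ch_x: int, in_ch_g: int, inter_ch: int):
        super().__init__()
        self.theta_x = nn.Conv2d(in_ch_x, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(in_ch_g, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # g is assumed already upsampled to x's spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn  # keep vertebral-body responses, damp the rest


def fed_avg(global_model: nn.Module, client_models, client_sizes):
    """One aggregation round: weighted average of client parameters by data size."""
    total = float(sum(client_sizes))
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(client_models, client_sizes)
        )
    global_model.load_state_dict(new_state)
    return global_model
```

In a federated round under this sketch, each institution would train its local copy of the segmentation network on its own labeled MRI data, send only the updated weights to the server, and receive the averaged global model back, so raw patient images never leave the institution.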
