Abstract
Objective
To develop a multi-scene model that can automatically segment acute vertebral compression fractures (VCFs) from spine radiographs.

Methods
In this multicenter study, we collected radiographs from five hospitals (Hospitals A–E) between November 2016 and October 2019. The study included participants with acute VCFs as well as healthy controls. For the development of the Positioning and Focus Network (PFNet), we used a training dataset of 1071 participants from Hospitals A and B. The validation dataset included 458 participants from Hospitals A and B, whereas external test datasets 1–3 included 301 participants from Hospital C, 223 from Hospital D, and 261 from Hospital E, respectively. We evaluated the segmentation performance of the PFNet model and compared it with previously described approaches. Additionally, we used qualitative comparison and gradient-weighted class activation mapping (Grad-CAM) to explain the feature learning and segmentation results of the PFNet model.

Results
The PFNet model achieved accuracies of 99.93%, 98.53%, 99.21%, and 100% for the segmentation of acute VCFs in the validation dataset and external test datasets 1–3, respectively. Receiver operating characteristic curves comparing the four models across the validation and external test datasets consistently showed that the PFNet model outperformed the other approaches, achieving the highest values for all measures. The qualitative comparison and Grad-CAM provided an intuitive view of the interpretability and effectiveness of our PFNet model.

Conclusion
In this study, we successfully developed a multi-scene model based on spine radiographs for precise preoperative and intraoperative segmentation of acute VCFs.

Critical relevance statement
Our PFNet model demonstrated high accuracy in multi-scene segmentation in clinical settings, making it a significant advancement in this field.

Key Points
- This study developed the first multi-scene deep learning model capable of segmenting acute VCFs from spine radiographs.
- The model's architecture consists of two crucial modules: an attention-guided module and a supervised decoding module.
- The exceptional generalization and consistently superior performance of our model were validated using multicenter external test datasets.
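The abstract uses Grad-CAM to explain what the network attends to. For readers unfamiliar with the technique, the following is a minimal illustrative sketch of Grad-CAM in PyTorch; the backbone (a torchvision ResNet-18) and the hooked layer are assumptions chosen to keep the example self-contained, and this is not the authors' PFNet code.

```python
# Minimal Grad-CAM sketch (illustrative only; not the PFNet implementation).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # assumed stand-in backbone
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block (an assumption; in practice you hook
# the layer whose spatial feature maps you want to explain).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for a radiograph tensor
logits = model(x)
cls = int(logits.argmax(dim=1))        # class whose evidence we visualize
model.zero_grad()
logits[0, cls].backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum across channels, apply ReLU, then upsample to input resolution.
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # scale to [0, 1]
```

The resulting heatmap can be overlaid on the input radiograph to show which regions drove the prediction, which is how Grad-CAM visualizations such as those reported here are typically produced.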
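The reported accuracies and ROC comparisons are standard pixel-wise evaluations of a binary segmentation against ground truth. As a rough sketch of how such metrics are commonly computed, the example below uses scikit-learn with random stand-in arrays; the threshold of 0.5 and the inclusion of a Dice score are illustrative assumptions, not details taken from the study.

```python
# Illustrative segmentation evaluation: pixel accuracy, Dice, and ROC/AUC.
# The arrays are random stand-ins, not the study's data.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
prob = rng.random((256, 256))            # predicted fracture probability map
truth = rng.random((256, 256)) > 0.5     # stand-in ground-truth mask

pred = prob > 0.5                        # assumed binarization threshold
accuracy = (pred == truth).mean()
dice = 2 * (pred & truth).sum() / (pred.sum() + truth.sum() + 1e-8)

# ROC curve over per-pixel probabilities; AUC summarizes the curve.
fpr, tpr, _ = roc_curve(truth.ravel(), prob.ravel())
print(f"accuracy={accuracy:.4f}  dice={dice:.4f}  AUC={auc(fpr, tpr):.4f}")
```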