Abstract

The study aims to enhance the accuracy and practicability of CT image segmentation and volume measurement of ICH using deep learning technology. A dataset comprising the brain CT images and clinical data of 1,027 patients with spontaneous ICH treated from January 2010 to December 2020 was retrospectively analyzed, and a deep segmentation network (AttFocusNet) integrating the focus structure and the attention gate (AG) mechanism was proposed to enable automatic, accurate CT image segmentation and volume measurement of ICH. In the internal validation set, AttFocusNet achieved a Dice coefficient of 0.908, an intersection-over-union (IoU) of 0.874, a sensitivity of 0.913, a positive predictive value (PPV) of 0.957, and a 95% Hausdorff distance (HD95) of 5.960 mm. The intraclass correlation coefficient (ICC) of the ICH volume measurement between AttFocusNet and the ground truth was 0.997. The average processing times per case for AttFocusNet, the Coniglobus formula, and manual segmentation were 5.6, 47.7, and 170.1 s, respectively. In the two external validation sets, AttFocusNet achieved Dice coefficients of 0.889 and 0.911, IoUs of 0.800 and 0.836, sensitivities of 0.817 and 0.849, PPVs of 0.976 and 0.981, and HD95 values of 5.331 mm and 4.220 mm, respectively. The ICCs of the ICH volume measurement between AttFocusNet and the ground truth were 0.939 and 0.956, respectively. The proposed segmentation network AttFocusNet significantly outperforms the Coniglobus formula in ICH segmentation and volume measurement, yielding measurements closer to the true ICH volume while substantially reducing the clinical workload.
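For readers interpreting the reported figures, the sketch below illustrates how the overlap metrics (Dice, IoU, sensitivity, PPV) and a voxel-based hemorrhage volume can be computed from a predicted binary mask and a ground-truth mask. This is a minimal illustrative example assuming NumPy arrays and known CT voxel spacing; the function names are hypothetical and are not taken from the paper's implementation.

```python
# Illustrative sketch (not the authors' code): overlap metrics and volume
# from binary segmentation masks, matching the metrics named in the abstract.
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """Dice, IoU, sensitivity, and PPV for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true-positive voxels
    fp = np.logical_and(pred, ~gt).sum()      # false-positive voxels
    fn = np.logical_and(~pred, gt).sum()      # false-negative voxels
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),   # Dice coefficient
        "iou": tp / (tp + fp + fn + eps),            # intersection-over-union
        "sensitivity": tp / (tp + fn + eps),         # recall over ground truth
        "ppv": tp / (tp + fp + eps),                 # positive predictive value
    }

def volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Hemorrhage volume in millilitres from a binary mask and voxel spacing (mm)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0
```

Volumes computed this way for the predicted and ground-truth masks are the paired measurements that an ICC (as reported in the abstract) would compare across cases.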
