Abstract
Computational hemodynamics is increasingly being used to quantify hemodynamic characteristics in and around abdominal aortic aneurysms (AAA) in a patient-specific fashion. However, time-consuming manual annotation hinders the clinical translation of computational hemodynamic analysis. We therefore investigated the feasibility of using deep-learning-based image segmentation to reduce the time required for manual segmentation. Two recent deep-learning-based image segmentation methods, ARU-Net and CACU-Net, were used to test the feasibility of automated computer model creation for computational hemodynamic analysis. Morphological features and hemodynamic metrics of 30 computed tomography angiography (CTA) scans were compared between network predictions and manually built models. The DICE score for both networks was 0.916, and the correlation value was above 0.95, indicating their ability to generate models comparable to human segmentation. Bland-Altman analysis showed good agreement between the deep-learning and manual segmentation results. Compared with manual model recreation for computational hemodynamics, the time for automated computer model generation was significantly reduced (from ∼2 h to ∼10 min). Automated image segmentation can thus substantially reduce the time spent recreating patient-specific AAA models. Moreover, our study showed that both CACU-Net and ARU-Net can accomplish AAA segmentation, with CACU-Net outperforming ARU-Net in both accuracy and time savings.
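The DICE score reported above is the standard Dice similarity coefficient between a predicted and a manual segmentation mask. As a minimal illustration (not the paper's actual evaluation code, which is not given), the metric can be computed for binary masks as follows; the toy masks here are purely hypothetical:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy example: two 4x4 masks, each with 4 foreground pixels, 2 overlapping
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 2:4] = 1
print(dice_score(a, b))  # → 0.5
```

A Dice value of 0.916, as reported for both networks, means the overlap term accounts for roughly 92% of the combined mask volume, which is generally considered strong agreement for vascular segmentation.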