Abstract

Intrathoracic airway segmentation from computed tomography (CT) images is a frequent prerequisite for further quantitative lung analyses. Due to low contrast and noise, especially at peripheral branches, automatic methods often struggle to strike a balance between extracting deeper airway branches and avoiding leakage into the surrounding parenchyma. Meanwhile, manual annotation of the airway tree is extremely time-consuming, which limits automated methods that require training data. To address this, we introduce a 3D deep learning-based workflow that produces high-quality airway segmentations from incompletely labeled training data generated without manual intervention. We first train a 3D fully convolutional network (FCN), since 3D spatial information is crucial for small, highly anisotropic tubular structures such as airways. For training the 3D FCN, we develop a domain-specific sampling scheme that strategically uses incomplete labels from a previous, highly specific segmentation method, aiming to retain similar specificity while boosting sensitivity. Finally, to address local discontinuities in the coarse 3D FCN output, we apply a graph-based refinement incorporating fuzzy connectedness segmentation and robust curve skeletonization. Evaluations on the EXACT'09 and LTRC datasets demonstrate considerable improvements in airway extraction while maintaining reasonable leakage, compared with a state-of-the-art method and the dataset reference standard.
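Since the full text is not available here, the following is only a rough illustrative sketch of what a voxel-wise 3D FCN for airway probability prediction could look like in PyTorch. The architecture, layer widths, and patch size are assumptions for illustration, not details taken from the paper, and the sampling scheme and graph-based refinement described in the abstract are not shown.

# Minimal sketch (not the authors' implementation) of a small 3D FCN that maps a
# single-channel CT patch to a per-voxel airway probability map. All hyperparameters
# below are illustrative assumptions.
import torch
import torch.nn as nn

class Small3DFCN(nn.Module):
    """Tiny 3D FCN: two 3x3x3 conv blocks followed by a 1x1x1 voxel-wise classifier."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=3, padding=1),
            nn.BatchNorm3d(base),
            nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=3, padding=1),
            nn.BatchNorm3d(base * 2),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution produces one airway-probability logit per voxel.
        self.classifier = nn.Conv3d(base * 2, 1, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

if __name__ == "__main__":
    net = Small3DFCN()
    # One single-channel CT patch of 64^3 voxels: (batch, channel, depth, height, width).
    patch = torch.randn(1, 1, 64, 64, 64)
    prob = net(patch)
    print(prob.shape)  # torch.Size([1, 1, 64, 64, 64])

Because the network is fully convolutional, the same weights can be applied to patches of other sizes at inference time, which is why patch-based training against (possibly incomplete) voxel labels is a natural fit for this kind of model.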
