Abstract

Segmentation of lungs with acute respiratory distress syndrome (ARDS) is a challenging task due to diffuse opacification in dependent regions, which results in little to no contrast at the lung boundary. For segmentation of severely injured lungs, local intensity and texture information, as well as global contextual information, are important factors for consistent inclusion of intrapulmonary structures. In this study, we propose a deep learning framework which uses a novel multi-resolution convolutional neural network (ConvNet) for automated segmentation of lungs in multiple mammalian species with injury models similar to ARDS. The multi-resolution model eliminates the need to trade off between high resolution and global context by using a cascade of low-resolution to high-resolution networks. Transfer learning is used to accommodate the limited number of training datasets. The model was initially pre-trained on human CT images, and subsequently fine-tuned on canine, porcine, and ovine CT images with lung injuries similar to ARDS. The multi-resolution model was compared to both high-resolution and low-resolution networks alone. The multi-resolution model outperformed both the low- and high-resolution models, achieving an overall mean Jaccard index of 0.963 ± 0.025 compared to 0.919 ± 0.027 and 0.950 ± 0.036, respectively, for the animal dataset (N=287). The multi-resolution model achieved an overall average symmetric surface distance of 0.438 ± 0.315 mm, compared to 0.971 ± 0.368 mm and 0.657 ± 0.519 mm for the low-resolution and high-resolution models, respectively. We conclude that the multi-resolution model produces accurate segmentations in severely injured lungs, which is attributed to the inclusion of both local and global features.
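The Jaccard index reported above is the standard overlap metric for binary segmentation masks (intersection over union). A minimal sketch of how it can be computed with NumPy is shown below; the function name and array shapes are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection-over-union between two binary segmentation masks.

    pred, target: arrays of the same shape; nonzero voxels are foreground.
    Returns a value in [0, 1]; 1.0 means perfect overlap.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: two empty masks are considered identical.
    return intersection / union if union > 0 else 1.0

# Toy 2-D example: predicted mask covers the true mask plus one extra pixel.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(jaccard_index(pred, target))  # → 0.5
```

The average symmetric surface distance (ASSD) additionally requires extracting boundary voxels and computing distances between the two surfaces, which is commonly done with distance transforms.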
