Abstract

The semantic segmentation of 3D medical image stacks enables accurate volumetric reconstruction, computer-aided diagnostics and follow-up treatment planning. In this work, we present a novel variant of the Unet model, called NUMSnet, that transmits pixel-neighborhood features across scans through nested layers to achieve accurate multi-class semantic segmentation with minimal training data. We analyze the segmentation performance of the NUMSnet model against several Unet variants when segmenting 3–7 regions of interest using only 5–10% of the images per Lung-CT and Heart-CT volumetric stack for training. Compared to the Unet++ model, the proposed NUMSnet model achieves up to 20% improvement in segmentation recall, with 2–9% higher Dice scores on Lung-CT stacks and 2.5–16% higher Dice scores on Heart-CT stacks. The NUMSnet model must be trained with ordered images around the central scan of each volumetric stack. Propagating image features from the six nested layers of the Unet++ model is found to yield better computational and segmentation performance than propagating fewer hidden layers or all ten up-sampling layers. The NUMSnet model achieves segmentation performance comparable to previous works while being trained on as few as 5–10% of the images in each 3D stack. In addition, transfer learning enables faster convergence of the NUMSnet model when moving from pathology segmentation in Lung-CT images to cardiac segmentation in Heart-CT stacks. Thus, the proposed model can standardize multi-class semantic segmentation across a variety of volumetric image stacks with a minimal training dataset, significantly reducing the cost, time and inter-observer variability associated with computer-aided detection and treatment.
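
The core mechanism described above is the propagation of nested-layer features from one scan to its neighbor in an ordered stack. The following is a minimal PyTorch-style sketch of one way a Unet++-style nested node could ingest the previous scan's features; all names (conv_block, NestedNodeWithNeighbor), channel counts, and the fusion-by-concatenation choice are illustrative assumptions, not the authors' implementation.

# Minimal sketch: a nested node that fuses the neighboring scan's features.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in typical Unet variants.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class NestedNodeWithNeighbor(nn.Module):
    # One Unet++-style nested node that also ingests the same node's output
    # computed for the neighboring scan in the ordered stack.
    def __init__(self, skip_ch, deeper_ch, out_ch):
        super().__init__()
        self.out_ch = out_ch
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.fuse = conv_block(skip_ch + deeper_ch + out_ch, out_ch)

    def forward(self, skip, deeper, neighbor=None):
        # neighbor: this node's output from the previous scan; the first scan
        # in the ordered sequence has no neighbor, so zeros are substituted.
        if neighbor is None:
            b, _, h, w = skip.shape
            neighbor = skip.new_zeros(b, self.out_ch, h, w)
        return self.fuse(torch.cat([skip, self.up(deeper), neighbor], dim=1))

# Carrying one nested node's output across an ordered sequence of scans
# (dummy per-slice encoder features stand in for real CT inputs):
scan_features = [(torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32))
                 for _ in range(3)]
node = NestedNodeWithNeighbor(skip_ch=32, deeper_ch=64, out_ch=32)
prev = None
for skip, deeper in scan_features:
    prev = node(skip, deeper, neighbor=prev)  # prev feeds the next scan

The loop reflects the requirement stated above that training images be ordered around the central scan: each scan's nested features are only meaningful as neighbor input if the slices are processed in sequence.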

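The Lung-CT to Heart-CT transfer mentioned above follows the standard fine-tuning pattern: reuse the pretrained weights and re-initialize only the output head for the new class count. The sketch below uses a hypothetical stand-in network (TinySegNet) and assumed class counts; it is not the authors' training code.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    # Stand-in for the full segmentation network; only the head matters here.
    def __init__(self, num_classes):
        super().__init__()
        self.body = nn.Conv2d(1, 16, 3, padding=1)
        self.head = nn.Conv2d(16, num_classes, 1)
    def forward(self, x):
        return self.head(torch.relu(self.body(x)))

lung = TinySegNet(num_classes=5)            # pretend this was trained on Lung-CT
heart = TinySegNet(num_classes=7)           # new task: cardiac regions (assumed count)
state = {k: v for k, v in lung.state_dict().items()
         if not k.startswith("head.")}      # drop the lung task's output head
heart.load_state_dict(state, strict=False)  # reuse the remaining weights
optimizer = torch.optim.Adam(heart.parameters(), lr=1e-4)  # fine-tune at a low LR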