Abstract

nnUNet is a state-of-the-art deep learning-based segmentation framework that automatically and systematically configures the entire network training pipeline. We extend the network architecture component of the nnUNet framework by integrating mechanisms from advanced U-Net variants, including residual, dense, and inception blocks, as well as three forms of the attention mechanism. We propose the following extensions to nnUNet: Residual-nnUNet, Dense-nnUNet, Inception-nnUNet, Spatial-Single-Attention-nnUNet, Spatial-Multi-Attention-nnUNet, and Channel-Spatial-Attention-nnUNet. Furthermore, within Channel-Spatial-Attention-nnUNet we integrate our newly proposed variation of the channel-attention mechanism. We demonstrate that use of nnUNet allows for consistent and transparent comparison of U-Net architectural modifications, while maintaining network architecture as the sole independent variable across experiments on a given dataset. The proposed variants are evaluated on eight medical imaging datasets spanning 20 anatomical regions, the largest collection of datasets on which attention U-Net variants have been compared in a single work. Our results suggest that the attention variants are effective at improving performance on tumour segmentation tasks involving two or more target anatomical regions, and that segmentation performance is influenced by the use of deep supervision as an architectural feature.
