Abstract
Deep learning has been widely utilized for medical image segmentation. The most commonly used U-Net and its variants often share two characteristics that lack solid evidence of effectiveness. First, each block (i.e., consecutive convolutions of feature maps at the same resolution) outputs feature maps only from the last convolution, limiting the variety of receptive fields. Second, the network has a symmetric structure in which the encoder and decoder paths have similar numbers of channels. We explored two novel revisions: a stacked dilated operation that outputs feature maps from multi-scale receptive fields to replace the consecutive convolutions, and an asymmetric architecture with fewer channels in the decoder path. Two novel models were developed: U-Net using the stacked dilated operation (SDU-Net) and asymmetric SDU-Net (ASDU-Net). We used both publicly available and private datasets to assess the efficacy of the proposed models. Extensive experiments confirmed that SDU-Net outperformed or matched the state of the art while using fewer parameters (40% of U-Net). ASDU-Net further reduced the parameter count to 20% of U-Net with performance comparable to SDU-Net. In conclusion, the stacked dilated operation and the asymmetric structure are promising for improving the performance of U-Net and its variants.
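To illustrate the idea of the stacked dilated operation described above, the following is a minimal PyTorch sketch of a block that applies several dilated convolutions in parallel and concatenates their outputs, so a single block exposes multiple receptive fields rather than only that of its last convolution. The specific dilation rates, the even channel split across branches, and the use of batch normalization are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class StackedDilatedBlock(nn.Module):
    """Sketch of a stacked dilated operation (assumed configuration):
    parallel 3x3 convolutions with different dilation rates whose outputs
    are concatenated, replacing consecutive same-resolution convolutions."""

    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        branch_channels = out_channels // len(dilations)  # assumed even split
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate feature maps from every dilation rate so the block
        # outputs multi-scale receptive fields.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


if __name__ == "__main__":
    block = StackedDilatedBlock(in_channels=64, out_channels=64)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```

Under this sketch, the asymmetric variant would simply pass a smaller `out_channels` to the decoder-side blocks than to their encoder counterparts, which is how the parameter reduction described in the abstract could be realized.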