Abstract

Several studies have demonstrated the excellent performance of deep learning in image segmentation, which usually depends on large amounts of annotated data. Medical image segmentation remains challenging, however, because annotated data are scarce. This study constructs a novel deep network for medical image segmentation, referred to as asymmetric U-Net generative adversarial networks with multi-discriminators (AU-MultiGAN). Specifically, the asymmetric U-Net is designed to produce multiple segmentation maps simultaneously, using dual-dilated blocks only in the feature-extraction stage. Further, a multi-discriminator module is embedded into the asymmetric U-Net structure, which captures the available sample information more fully and thereby promotes the transmission of feature information. A hybrid loss combining the segmentation and discriminator losses is developed, together with an adaptive method for selecting its scale factors. Moreover, the convergence of the proposed model is proved mathematically. The proposed AU-MultiGAN approach is evaluated on standard medical image benchmarks. Experimental results show that the architecture can be successfully applied to medical image segmentation and achieves superior performance compared with state-of-the-art baselines.
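For readers unfamiliar with hybrid segmentation-plus-adversarial objectives, the sketch below illustrates the general form of such a loss: a weighted sum of a segmentation term and a generator-side adversarial term averaged over multiple discriminators. This is a minimal PyTorch sketch under stated assumptions, not the paper's exact formulation: the Dice+BCE choice for the segmentation term, the function names, and the fixed scale factors `lambda_seg` and `lambda_adv` are illustrative (the paper selects its scale factors adaptively, by a method not reproduced here).

```python
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for a sigmoid segmentation map (inputs shaped NCHW)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def hybrid_loss(seg_logits, target, disc_scores, lambda_seg=1.0, lambda_adv=0.1):
    """Weighted combination of a segmentation loss and adversarial losses
    from several discriminators (hypothetical weights for illustration)."""
    # Segmentation term: Dice + binary cross-entropy (an assumed choice).
    seg = dice_loss(seg_logits, target) + F.binary_cross_entropy_with_logits(seg_logits, target)
    # Generator-side adversarial term: each discriminator's score on the
    # predicted map is pushed toward the "real" label, then averaged.
    adv = sum(
        F.binary_cross_entropy_with_logits(s, torch.ones_like(s))
        for s in disc_scores
    ) / len(disc_scores)
    return lambda_seg * seg + lambda_adv * adv
```

In training, `seg_logits` would be one of the generator's segmentation maps and `disc_scores` the raw outputs of the multi-discriminator module on that map; swapping the fixed weights for an adaptive schedule recovers the spirit of the paper's hybrid loss.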
