Abstract

Semantic segmentation of biomedical images has found its niche in screening and diagnostic applications. Recent methods based on deep convolutional neural networks have been very effective, since they are readily adaptable to biomedical applications and outperform competing segmentation methods. Inspired by the U-Net, we designed a deep learning network with an innovative architecture, hereafter referred to as AID-U-Net. Our network consists of direct contracting and expansive paths, and its distinguishing feature is the addition of sub-contracting and sub-expansive paths. Results on seven distinct medical image databases demonstrate that our proposed network outperforms state-of-the-art solutions without requiring specific pre-trained backbones for both 2D and 3D biomedical image segmentation tasks. Furthermore, we show that AID-U-Net dramatically reduces inference time and computational complexity in terms of the number of learnable parameters. The results further show that the proposed AID-U-Net can segment different medical objects, achieving improvements in 2D F1-score and 3D mean BF-score of 3.82% and 2.99%, respectively.
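
To make the architectural idea concrete, the following is a minimal, hypothetical sketch of a U-Net-style network whose encoder additionally contains a nested sub-contracting/sub-expansive path, loosely following the description above. All module names, depths, and channel counts here are illustrative assumptions for a 2D case, not the published AID-U-Net configuration.

```python
# Hypothetical sketch only: a U-Net-like model with one nested
# sub-contracting/sub-expansive path. Depths and channel counts are
# assumed for illustration and do not reproduce the paper's AID-U-Net.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class SubPath(nn.Module):
    """A shallow sub-contracting/sub-expansive path (one down/up step)."""

    def __init__(self, ch):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.down = conv_block(ch, ch * 2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, kernel_size=2, stride=2)
        self.fuse = conv_block(ch * 2, ch)

    def forward(self, x):
        d = self.down(self.pool(x))                  # sub-contracting step
        u = self.up(d)                               # sub-expansive step
        return self.fuse(torch.cat([x, u], dim=1))   # skip connection


class MiniSubPathUNet(nn.Module):
    """Direct contracting/expansive path with a nested sub-path at depth 1."""

    def __init__(self, in_ch=1, num_classes=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(base, base * 2)
        self.sub = SubPath(base * 2)                 # nested sub-path
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                            # direct contracting path
        e2 = self.enc2(self.pool(e1))
        s = self.sub(e2)                             # sub-contracting/expansive path
        d1 = self.dec1(torch.cat([e1, self.up1(s)], dim=1))  # direct expansive path
        return self.head(d1)


if __name__ == "__main__":
    model = MiniSubPathUNet(in_ch=1, num_classes=2)
    out = model(torch.randn(1, 1, 128, 128))
    print(out.shape)  # torch.Size([1, 2, 128, 128])
```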
