Abstract
Medical image segmentation is a fundamental and essential task for computer-aided diagnosis and disease quantification. However, robust and precise medical image segmentation remains challenging owing to many factors, such as complex backgrounds, overlapping structures, high variation in appearance, and low contrast. Recently, with the strong support of deep convolutional neural networks (DCNNs), encoder-decoder segmentation networks have become popular schemes for medical image analysis, yet DCNN-based segmentation still faces limitations such as restricted receptive fields and limited information flow. To address these challenges, a novel dual-branch deep residual U-Net is proposed in this paper for medical image segmentation; it provides more paths for information flow, gathering both high-level and low-level feature maps together with richer contextual information. A residual U-Net is first constructed for efficient feature expression using residual learning and an attention block. Meanwhile, an attention fusion block that combines an atrous spatial pyramid pooling (ASPP) block with a squeeze-and-excitation (SE) block is embedded in the residual U-Net to gather multi-scale contextual information. On this basis, to fully exploit local contextual information and improve segmentation precision, a dual-branch deep residual U-Net is built by stacking two residual U-Nets. Experimental results on multiple public medical image benchmarks, including the CVC-ClinicDB, GIAS, and LUNA16 data sets, indicate the superior performance of the proposed network on medical image segmentation compared with other advanced segmentation models.
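To make the building blocks named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a squeeze-and-excitation (SE) block, an atrous spatial pyramid pooling (ASPP) block, and a simple fusion of the two. Channel counts, dilation rates, the reduction ratio, and the `AttentionFusionBlock` composition are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the paper's implementation): SE block, ASPP block,
# and a hypothetical fusion of the two as the abstract describes.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise reweighting via global pooling."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: two-layer bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale each channel


class ASPPBlock(nn.Module):
    """ASPP: parallel dilated 3x3 convolutions, then a 1x1 projection."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(feats)


class AttentionFusionBlock(nn.Module):
    """Hypothetical fusion: multi-scale context (ASPP) followed by channel attention (SE)."""
    def __init__(self, channels: int):
        super().__init__()
        self.aspp = ASPPBlock(channels, channels)
        self.se = SEBlock(channels)

    def forward(self, x):
        return self.se(self.aspp(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)                    # e.g. a bottleneck feature map
    print(AttentionFusionBlock(64)(x).shape)          # -> torch.Size([1, 64, 32, 32])
```

In an encoder-decoder network of this kind, such a fusion block is typically placed at the bottleneck or along skip connections; the exact placement here is an assumption, since the abstract does not specify it.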