Abstract

Brain extraction is an essential pre-processing step for neuroimaging analysis. It is difficult to achieve high-precision extraction from low-quality brain MRI images with artifacts and gray-level inconsistencies, which often result in irregular holes in the extracted brain tissue. In addition, U-Net based brain extraction methods tend to output over-smoothed brain boundaries. To remove these irregular holes in the extracted mask, we propose a new U-Net based model for brain extraction named O-Net. O-Net replaces the skip-connection path in U-Net with dual shortcut paths that include an attention module, forming an O-shaped network; this design uses deep semantic information to highlight the target area while retaining more image detail. O-Net effectively reduces the impact on the extraction results of intensity differences caused by artifacts or gray-level inconsistencies in brain MRI images. To identify the brain boundary more accurately, we designed a new GAN based brain extraction method that uses the above O-Net as the segmentation network. The discrimination network of the proposed GAN adopts a residual structure to enhance its nonlinear expressive ability and to balance the adversarial training of the two networks. To speed up the convergence of the proposed model, a segmentation loss is added to the adversarial loss to supervise the feature learning of the segmentation network. The method was compared with other popular brain extraction methods on two public datasets (IBSR18 and LPBA40). The mean Dice similarity coefficients obtained by the proposed method were 97.26% on IBSR18 and 98.29% on LPBA40, the best results among the compared methods on both datasets. Experimental results show that the proposed model stably outputs high-precision brain extraction results and is only slightly affected by artifacts and gray-level inconsistencies.
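The Dice similarity coefficient reported above measures the overlap between a predicted brain mask and the ground-truth mask. A minimal sketch of how it is computed over binary masks (the masks and values here are illustrative, not from the paper's data):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * intersection / total

# Toy example: 3 overlapping foreground voxels out of 4 in each mask.
pred   = [1, 1, 1, 0, 0, 1]
target = [1, 1, 0, 0, 1, 1]
print(dice_coefficient(pred, target))  # 2*3 / (4+4) = 0.75
```

A value of 1.0 means the extracted brain mask matches the ground truth exactly; the paper's reported means (0.9726 and 0.9829) are computed per scan and averaged over each dataset.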

Highlights

  • As magnetic resonance imaging (MRI) equipment has become widely used in clinical applications, neuroimaging analysis has become increasingly powerful for brain disease diagnosis and brain function analysis

  • We evaluated the performance of the proposed model (WGAN+O-Net) through several comparative experiments against popular brain extraction algorithms

  • ROBEX performed normally on LPBA40 without major misrecognition, but on IBSR18 it was affected by artifacts and an inconsistent gray-scale distribution, and its extraction results retained large portions of the skull

Summary

INTRODUCTION

As magnetic resonance imaging (MRI) equipment has become widely used in clinical applications, neuroimaging analysis has become increasingly powerful for brain disease diagnosis and brain function analysis. Since brain extraction research began, many automatic methods have been proposed. These methods can be divided into classic methods [1,2,3,4,5,6,7], atlas-based methods [8,9], and learning-based methods [10,11,12,13,14,15,16,17,18,19,20]. Based on the above observations and analysis, this paper proposes a new brain extraction model, WGAN+O-Net, in which WGAN (Wasserstein GAN) [23] stably carries out adversarial training to improve the accuracy of our proposed segmentation network, O-Net. O-Net introduces attention modules into U-Net to form a new shortcut connection path between the corresponding feature maps on the encoding and decoding paths. In the remainder of this paper, we first review work related to the proposed brain extraction method, then give a detailed description of the proposed method, and finally verify the performance of the model through experiments on healthy and pathological brain MRI scans.
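The attention module on the shortcut path reweights encoder features using the deeper semantic signal coming from the decoder. A minimal additive-attention sketch of this idea, with hypothetical scalar weights (`w_x`, `w_g`, `w_psi`) standing in for the learned convolutions of the actual network:

```python
import math

def sigmoid(z):
    """Logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def attention_gate(skip, gate, w_x=1.0, w_g=1.0, w_psi=1.0):
    """Illustrative additive attention gate over a 1-D feature row.
    skip: encoder (shortcut) features; gate: deeper decoder features.
    The scalar weights are placeholders for learned convolution kernels."""
    out = []
    for x, g in zip(skip, gate):
        q = max(0.0, w_x * x + w_g * g)   # ReLU of the combined signal
        alpha = sigmoid(w_psi * q)        # attention coefficient in (0, 1)
        out.append(alpha * x)             # reweight the encoder feature
    return out

skip = [0.2, 0.9, 0.1]   # shallow features with spatial detail
gate = [0.0, 1.0, -2.0]  # deep semantic signal: high where brain is likely
print(attention_gate(skip, gate))
```

The effect is that encoder features at locations the deep semantic signal supports pass through at nearly full strength, while features at locations it contradicts are suppressed; in the full model this happens per channel via learned convolutions rather than scalar weights.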

RELATED WORK
WGAN-DIV
Attention block
Adversarial training
EXPERIMENTS
Findings
DISCUSSION
