Abstract

Objective. This paper proposes a conditional GAN (cGAN)-based method for data enhancement of breast ultrasound images and segmentation of breast tumors, which improves the realism of the enhanced breast ultrasound images and yields more accurate segmentation results. Approach. We use generative adversarial training for two tasks: (1) we generate a batch of labeled samples from the label-to-image perspective, expanding the dataset as a form of data enhancement; and (2) we use adversarial training in place of postprocessing steps such as conditional random fields to enforce higher-level spatial consistency. In addition, this work proposes a new network, EfficientUNet, based on U-Net, which combines a ResNet18 encoder, an attention mechanism, and deep supervision. The segmentation model uses the residual network as its encoder to retain information lost in the original encoder and to avoid the vanishing-gradient problem, thereby improving feature extraction, and it applies deep supervision to speed up convergence. The channel-wise weighting module of SENet then enables the model to capture tumor boundaries more accurately. Main results. Experiments comparing the proposed approach with mainstream methods on Dataset B verify its effectiveness: the Dice and IoU scores reach 0.8856 and 0.8111, respectively. Significance. This study successfully combines a cGAN with the optimized EfficientUNet for the segmentation of breast tumor ultrasound images. The conditional generative adversarial network performs well for data enhancement, and the optimized EfficientUNet makes the segmentation more accurate.
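To illustrate the channel-wise weighting module referenced in the abstract, the following is a minimal PyTorch-style sketch of a standard SENet squeeze-and-excitation block as it is commonly inserted into encoder features; the class name, reduction ratio, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """SENet-style squeeze-and-excitation block: reweights feature maps channel by channel."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over spatial dims
        self.fc = nn.Sequential(             # excitation: learn one weight per channel
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # emphasize informative channels, suppress less useful ones


# Example: apply the block to a batch of encoder feature maps
features = torch.randn(2, 64, 56, 56)
out = SEBlock(64)(features)  # same shape, channel-wise reweighted
```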
