Abstract

This paper presents an adversarial learning-based approach to synthesizing medical images for tissue recognition. The performance of medical image recognition models depends heavily on the representativeness and sufficiency of the training samples, and the high cost of collecting large amounts of practical medical images creates a demand for synthesized image samples. In this research, generative adversarial networks (GANs), which consist of a generative network and a discriminative network, are applied to develop a medical image synthesis model. Specifically, deep convolutional GANs (DCGANs), Wasserstein GANs (WGANs), and boundary equilibrium GANs (BEGANs) are implemented and compared for medical image synthesis. Convolutional neural networks (CNNs) are used within the GAN models to capture feature representations that encode high-level image semantics. Synthetic images are then generated by the generative network, which maps random noise to the image space. The effectiveness of the generative network is validated by the discriminative network, which is trained to distinguish synthetic images from real ones; through a minimax two-player game, the two networks train each other. The generated synthetic images are used to train a CNN classification model for tissue recognition. In experiments with the synthetic images, the tissue recognition model achieves an accuracy of 98.83%, which demonstrates the effectiveness and applicability of synthesizing medical images with GAN models.
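As an informal illustration of the adversarial training described in the abstract, the following PyTorch sketch shows a DCGAN-style generator and discriminator optimized through the minimax game. The network depths, the 32x32 grayscale image size, the hyperparameters, and the names G, D, and train_step are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch of the adversarial (minimax) training loop, DCGAN-style.
# Architecture sizes and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

latent_dim = 100

# Generator: maps random noise z to a synthetic 32x32 grayscale image.
G = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
)

# Discriminator: convolutional network that scores real vs. synthetic images.
D = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real_images):
    """One minimax step: update D on real vs. synthetic, then update G to fool D."""
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator update: classify real images as 1 and synthetic images as 0.
    z = torch.randn(batch, latent_dim, 1, 1)
    fake = G(z)
    loss_D = bce(D(real_images), ones) + bce(D(fake.detach()), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: push the discriminator's score on synthetic images toward 1.
    loss_G = bce(D(fake), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

The WGAN and BEGAN variants compared in the paper keep this alternating update structure but replace the binary cross-entropy objective with the Wasserstein critic loss and the boundary-equilibrium auto-encoder loss, respectively.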
