Abstract

Background and objective
Due to the lack of training data, accurate classification of skin lesions remains a major challenge. Generative adversarial networks (GANs) have been used successfully to synthesize dermoscopy images. Unfortunately, previous methods usually feed category labels directly into the GAN, which provides no effective information gain for the classification model. This paper studies a specific conditional image synthesis method that converts a semantic segmentation map into a dermoscopy image.

Methods
We propose a conditional GAN (cGAN) for high-resolution dermoscopy image synthesis. First, we establish an effective label mapping with pathological significance by combining the segmentation mask and the category label of skin lesions. Then, a cGAN based on the image-to-image translation framework is constructed, taking this label mapping as input to generate dermoscopy images. In particular, shallow and deep features are combined in the generator to avoid the loss of semantic information, and a discriminator-based feature matching loss is introduced to improve the quality of the generated images.

Results
The proposed method is evaluated on the ISIC-2017 skin dataset. Compared with several representative GAN architectures, including the latest semantic image synthesis method, the proposed method performs better in both visual quality and quantitative evaluation. Moreover, using the generated images, the average AUC of several skin lesion classification models can be improved effectively.

Conclusions
The proposed method generates highly realistic, high-resolution dermoscopy images, leading to improved performance of skin lesion classification models, and could also help alleviate data shortage and class imbalance problems in medical image analysis.
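The abstract describes combining a lesion segmentation mask with a category label into a single semantic label map that conditions the generator. The paper's exact encoding is not given here; the sketch below shows one plausible scheme, in which background pixels map to channel 0 and lesion pixels map to a channel determined by the class label, producing a one-hot map suitable as input to an image-to-image translation GAN (the function name and channel layout are illustrative assumptions):

```python
import numpy as np

def build_label_map(seg_mask, class_id, num_classes=3):
    """Combine a binary segmentation mask with a lesion class label
    into a semantic label map.

    Hypothetical encoding (not taken from the paper): 0 = background,
    class_id + 1 = lesion pixels of the given class.
    """
    seg_mask = np.asarray(seg_mask, dtype=np.int64)
    # Assign lesion pixels the class-specific integer label.
    label_map = np.where(seg_mask > 0, class_id + 1, 0)
    # One-hot encode (H, W) -> (H, W, num_classes + 1) so the map can
    # be fed to a conditional generator as a multi-channel input.
    one_hot = np.eye(num_classes + 1, dtype=np.float32)[label_map]
    return label_map, one_hot

# Toy example: a 2x2 mask with two lesion pixels, class 1 (e.g. melanoma).
mask = np.array([[0, 1],
                 [1, 0]])
label_map, one_hot = build_label_map(mask, class_id=1)
```

In this encoding the generator sees both where the lesion is (the mask geometry) and what it is (the class channel), which is the "label mapping with pathological significance" the abstract refers to.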
