Abstract

The crux of medical image segmentation lies in learning pixel-wise semantic consistency from a large number of labeled samples obtained through exhaustive annotation. Most existing methods rely heavily on labeled image pairs and are optimized in a fully supervised manner, often yielding unsatisfactory results due to the limited number of paired training samples. In this paper, we propose InvSSL, a GAN inversion-based semi-supervised learning framework for medical image segmentation that generates corresponding variants of labeled images via GAN inversion. Specifically, it first trains a StyleGAN as an inversion generator capable of producing samples from the same distribution as the training data, and then obtains variant samples from each training sample via GAN inversion to strengthen segmentation performance in a semi-supervised manner. In particular, the inversion generator inverts a training sample into the latent space and perturbs the latent codes to produce variant samples. To model high-level consistency between pixels and decouple semantic components in the latent space, InvSSL employs a multi-level dense contrastive learning mechanism. Extensive experiments on lung segmentation and skin lesion segmentation demonstrate that InvSSL outperforms state-of-the-art methods for medical image segmentation. Our code will be released at https://github.com/funkdub/InvSSL.
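The invert-then-perturb step described above can be sketched in miniature. The snippet below is a hedged toy illustration, not the paper's implementation: `generator` is a frozen random projection standing in for a trained StyleGAN, `invert` recovers a latent code by simple gradient descent on the reconstruction error (one common GAN-inversion strategy; the paper's exact optimizer is not specified here), and `make_variants` adds Gaussian noise to the recovered code to synthesize variant samples.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMG_DIM = 8, 64
# Frozen toy "generator" weights -- a hypothetical stand-in for a trained StyleGAN.
W_proj = rng.standard_normal((IMG_DIM, LATENT_DIM))

def generator(w):
    """Toy generator G: maps a latent code w to a flat 'image'."""
    return np.tanh(W_proj @ w)

def invert(x, steps=500, lr=0.01):
    """Recover a latent code for image x by minimizing ||G(w) - x||^2
    with plain gradient descent (illustrative inversion, not the paper's)."""
    w = np.zeros(LATENT_DIM)
    for _ in range(steps):
        g = generator(w)
        # Chain rule through tanh: d/dw ||g - x||^2 ∝ W^T ((g - x) * (1 - g^2))
        grad = W_proj.T @ ((g - x) * (1.0 - g ** 2))
        w -= lr * grad
    return w

def make_variants(x, n=4, sigma=0.1):
    """Invert x, then perturb the recovered latent code to get n variants."""
    w = invert(x)
    return [generator(w + sigma * rng.standard_normal(LATENT_DIM)) for _ in range(n)]

# A "labeled" sample lying on the generator's manifold, and its variants.
x = generator(rng.standard_normal(LATENT_DIM))
variants = make_variants(x)
```

In the full framework these latent-space variants inherit the original sample's segmentation label, enlarging the effective labeled set for semi-supervised training.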
