Abstract

This paper presents a novel unsupervised segmentation method for 3D microstructures in micro-computed tomography (micro-CT) images. Micro-CT scanning of resected lung cancer specimens can capture the detailed anatomical structures of the lesions and their surrounding tissue. However, segmenting these images is difficult. Recently, unsupervised learning methods have improved greatly, especially in their ability to learn generative models such as variational auto-encoders (VAEs) and generative adversarial networks (GANs). Meanwhile, most recent segmentation methods based on deep neural networks continue to rely on supervised learning, which makes it difficult for them to cope with the growing number of unlabeled micro-CT images. In this paper, we develop a generative model that can infer segmentation labels by extending α-GAN, a principled combination of variational inference and adversarial learning. Our method consists of two phases. In the first phase, we train our model by iterating two steps: (1) inferring pairs of continuous and discrete latent variables for image patches randomly extracted from an unlabeled image and (2) generating image patches from the inferred pairs of latent variables. In the second phase, the trained model assigns labels to patches from a target image to obtain the segmented image. We evaluated our method on three micro-CT images of a lung cancer specimen, with the aim of automatically dividing each image into three regions: invasive carcinoma, noninvasive carcinoma, and normal tissue. Our experiments show promising results both quantitatively and qualitatively.
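To make the two-phase procedure concrete, the following is a minimal PyTorch sketch of the inference/generation iteration (phase 1) and the patch-labeling step (phase 2). All names, architecture choices, and hyperparameters (patch size, latent dimensions, Gumbel-softmax relaxation of the discrete latent) are illustrative assumptions, and the adversarial critics of α-GAN are omitted; this is not the authors' exact model.

```python
# Illustrative sketch only: reconstruction path of a patch-based model with
# continuous (z) and discrete (c) latents; alpha-GAN discriminators omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH, Z_DIM, N_CLASSES = 32, 16, 3  # assumed patch size, latent dim, number of regions


class Encoder(nn.Module):
    """Infers a continuous latent z and a discrete latent c for each image patch."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * (PATCH // 4) ** 2
        self.to_z = nn.Linear(feat, Z_DIM)       # continuous latent
        self.to_c = nn.Linear(feat, N_CLASSES)   # discrete latent (class logits)

    def forward(self, x):
        h = self.features(x)
        return self.to_z(h), self.to_c(h)


class Generator(nn.Module):
    """Reconstructs a patch from the inferred pair of latent variables (z, c)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 64 * (PATCH // 4) ** 2), nn.ReLU(),
            nn.Unflatten(1, (64, PATCH // 4, PATCH // 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))


def train_step(encoder, generator, patches, opt):
    """Phase 1 (simplified): infer (z, c) for random patches, then regenerate them."""
    z, c_logits = encoder(patches)
    c = F.gumbel_softmax(c_logits, tau=1.0, hard=False)  # relaxed discrete latent
    recon = generator(z, c)
    loss = F.l1_loss(recon, patches)  # adversarial terms of alpha-GAN not shown
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


@torch.no_grad()
def label_patches(encoder, patches):
    """Phase 2: assign each target-image patch its most likely discrete label."""
    _, c_logits = encoder(patches)
    return c_logits.argmax(dim=1)  # e.g. 0/1/2 for invasive, noninvasive, normal


if __name__ == "__main__":
    enc, gen = Encoder(), Generator()
    opt = torch.optim.Adam(list(enc.parameters()) + list(gen.parameters()), lr=1e-4)
    dummy = torch.rand(8, 1, PATCH, PATCH)  # stand-in for randomly extracted patches
    print("reconstruction loss:", train_step(enc, gen, dummy, opt))
    print("patch labels:", label_patches(enc, dummy))
```

In the full method, the segmented image would be assembled by mapping each patch's discrete label back to its location in the target volume; the sketch above only shows the per-patch labeling step.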
