Abstract

Generative Adversarial Networks (GANs) are an emerging training paradigm that promises a step improvement in the already impressive feature learning capabilities of deep neural networks. Unlike supervised learning approaches, GANs learn generalizable features without requiring labeled images, enabling new capabilities such as distinguishing previously unseen anomalies, creating novel data instances, and factorizing learned features into explainable dimensions, all in a fully unsupervised fashion. These advanced feature learning properties will enable the next generation of computational image understanding tasks. However, GAN models are difficult to train to convergence, especially on high-resolution, high-dimensional datasets such as image volumes. We develop a GAN approach that learns a generative model of T1-contrast 3D MRI volumes of the healthy human brain by training on 1112 MRI images from the Human Connectome Project. Our method uses a first, unconditional super-resolution GAN, dubbed the shape network, to learn the 3D shape variations of adult brains, and a second, conditional pix2pix GAN, dubbed the texture network, to upgrade image slices with realistic local contrast patterns. Novel 3D MRI volumes are synthesized by first applying a 3D voxel-wise deformation map, generated by the shape network, to deform the Montreal Neurological Institute (MNI) brain template, and then performing style transfer on the axial slices using the texture network. The Maximum Mean Discrepancy (MMD) and Multi-scale Structural Similarity Index Measure (MS-SSIM) scores of MRI volumes synthesized with our approach are competitive with state-of-the-art GAN methods.
Our work establishes the feasibility of an alternative approach to high-dimensional GAN learning: splitting the type of information content learned among several GANs can be an effective form of regularization, complementary to the latent-code shaping and super-resolution techniques used in state-of-the-art methods.
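The two-stage synthesis pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the `(3, D, H, W)` layout of the deformation field, and the `texture_net` callable standing in for the trained pix2pix generator are all hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_template(template, deformation):
    """Warp a 3D template with a voxel-wise displacement field.

    template:    (D, H, W) array, e.g. the MNI brain template.
    deformation: (3, D, H, W) displacement field, assumed here to be
                 the output of the shape network (hypothetical layout).
    """
    grid = np.indices(template.shape).astype(np.float64)
    coords = grid + deformation  # displaced sampling coordinates
    return map_coordinates(template, coords, order=1, mode="nearest")

def synthesize_volume(template, deformation, texture_net):
    """Two-stage synthesis: 3D shape deformation, then slice-wise texture.

    texture_net is a placeholder callable that maps one axial slice to a
    slice with realistic local contrast (the texture network's role).
    """
    warped = deform_template(template, deformation)
    return np.stack([texture_net(s) for s in warped], axis=0)
```

With an identity (all-zero) deformation and an identity texture function, the pipeline returns the template unchanged, which is a convenient sanity check for the plumbing.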
