Abstract

Cross-modality image estimation involves generating images of one medical imaging modality from images of another modality. Convolutional neural networks (CNNs) have been shown to be useful for image-to-image intensity projection, in addition to identifying, characterising and extracting image patterns. Generative adversarial networks (GANs) use CNNs as generators, and an additional discriminator network classifies the estimated images as real or synthetic. Within the image estimation framework, CNNs and GANs may be considered more generally as deep learning approaches, since medical images tend to be large, which in turn calls for large neural networks. Most research in the CNN/GAN image estimation literature has involved MRI data, with the other modality most commonly being PET or CT. This review provides an overview of the use of CNNs and GANs for cross-modality medical image estimation. We outline recently proposed neural networks, detail the constructs employed for CNN- and GAN-based image-to-image synthesis, and summarise the motivations behind cross-modality image estimation. Based on our analysis of metrics comparing estimated and actual images, GANs appear to offer better utility for cross-modality image estimation than CNNs. Our final remarks highlight key challenges facing the cross-modality medical image estimation field, including how intensity projection can be constrained by registration (paired versus unpaired data), the use of image patches, additional networks, and spatially sensitive loss functions.
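
To make the generator/discriminator arrangement concrete, the following is a minimal, illustrative PyTorch sketch of a paired, pix2pix-style cross-modality setup. The layer sizes, the L1 weighting, and the toy MRI-to-CT pairing are assumptions chosen for illustration; they are not the specific architectures evaluated in this review.

```python
# Illustrative sketch only: a minimal paired (pix2pix-style) GAN for
# cross-modality image estimation, e.g. MRI -> pseudo-CT. Layer sizes,
# loss weights and the toy data are placeholders, not reviewed models.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """CNN generator: maps a source-modality image to a target-modality estimate."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Classifies (source, estimated-or-real target) pairs as real or synthetic."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake logits
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a toy batch of paired 2D slices.
src = torch.randn(4, 1, 64, 64)   # e.g. MRI slices
tgt = torch.randn(4, 1, 64, 64)   # e.g. corresponding (registered) CT slices

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake = G(src).detach()
pred_real, pred_fake = D(src, tgt), D(src, fake)
d_loss = adv_loss(pred_real, torch.ones_like(pred_real)) + \
         adv_loss(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the target (L1 term).
fake = G(src)
pred = D(src, fake)
g_loss = adv_loss(pred, torch.ones_like(pred)) + 100.0 * l1_loss(fake, tgt)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

The L1 term in the generator objective is one example of the spatially sensitive loss functions mentioned above, and it presupposes paired (registered) data; unpaired settings typically replace it with cycle-consistency or similar constraints.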
