Abstract

Nowadays, the most successful approaches for image-to-image translation are based on generative adversarial networks (GANs). These deep learning frameworks have become a reference technique for learning generative models. In particular, GANs allow image-to-image translation tasks to be trained with unpaired data, which enables their use in numerous application domains where paired data are difficult to obtain. Nevertheless, in medical imaging, paired data can often be gathered easily due to the common use of complementary imaging techniques in modern clinical practice. For instance, the availability of paired data has been successfully exploited for the multimodal reconstruction of retinal images, which consists of an image-to-image translation between complementary retinal imaging modalities. In this context, multimodal reconstruction not only provides an estimate of an additional modality but also allows relevant retinal patterns to be learned that are useful for transfer learning purposes.
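To illustrate how paired data can be exploited for this kind of translation, the following is a minimal sketch of a pix2pix-style training step (adversarial loss plus L1 reconstruction loss) in PyTorch. It is not taken from the paper; the network definitions and names are hypothetical placeholders, and any encoder-decoder generator and patch discriminator could be substituted.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder mapping a source modality to a target modality."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy patch discriminator on concatenated (source, target) pairs."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

def training_step(gen, disc, opt_g, opt_d, src, tgt, lambda_l1=100.0):
    """One paired-translation update: src and tgt are aligned images
    of the two complementary modalities (hypothetical variable names)."""
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Discriminator step: distinguish real pairs from generated pairs.
    fake = gen(src).detach()
    d_real = disc(src, tgt)
    d_fake = disc(src, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the paired target.
    fake = gen(src)
    d_fake = disc(src, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, tgt)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The L1 term is what distinguishes the paired setting from unpaired approaches such as CycleGAN: because each source image has a corresponding target image, the generator can be supervised directly at the pixel level.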
