Abstract

This paper explores the application of deep learning-based methods for the multimodal reconstruction of fluorescein angiography from retinography. The objective of this multimodal reconstruction is not only to estimate an invasive modality from a non-invasive one, but also to apply the learned models for transfer learning or domain adaptation. Deep neural networks have proven successful at learning the mapping between complementary image domains, using either paired or unpaired data. Paired data allows the models to exploit the rich information available from the pixelwise correspondence of the images. However, this requires the pre-registration of the multimodal image pairs. In the case of retinal images, multimodal registration is a challenging task that may fail in complex scenarios, such as severe pathological cases or low-quality samples. In contrast, generative adversarial networks allow learning the mapping between image domains using unpaired data. This avoids the pre-registration of the images and allows all the available data to be included in training. In this work, we analyze both paired and unpaired deep learning-based approaches for the multimodal reconstruction of retinal images. The objective is to understand the implications of each alternative and the considerations for their future use. For that purpose, we perform several experiments focused on producing a fair comparison between the paired and unpaired approaches.
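To make the contrast between the two settings concrete, the following is a minimal PyTorch sketch of the two training signals. The single-layer generators G and F, the discriminator D, and the loss weight are hypothetical placeholders, not the paper's actual architectures or hyperparameters: the pixelwise L1 term illustrates a paired objective, while the adversarial plus cycle-consistency terms illustrate a CycleGAN-style unpaired objective.

```python
# Hypothetical sketch of paired vs. unpaired training signals; the networks
# below are toy placeholders, not the models evaluated in the paper.
import torch
import torch.nn as nn

# Stand-ins for the retinography -> angiography generator (G), the reverse
# angiography -> retinography generator (F), and a patch discriminator (D).
G = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # retinography (RGB) -> angiography (gray)
F = nn.Conv2d(1, 3, kernel_size=3, padding=1)  # angiography (gray) -> retinography (RGB)
D = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # discriminator on the angiography domain

l1 = nn.L1Loss()
adv = nn.MSELoss()  # least-squares GAN loss, one common choice

retino = torch.rand(4, 3, 64, 64)  # retinography batch
angio = torch.rand(4, 1, 64, 64)   # angiography batch

# Paired setting: the pixelwise loss is only meaningful if (retino, angio)
# were registered beforehand, so that each pixel corresponds across modalities.
paired_loss = l1(G(retino), angio)

# Unpaired setting: no pixel correspondence is assumed. The generator is
# supervised indirectly, through a discriminator on the target domain plus a
# cycle-consistency term, so unregistered images can be used for training.
fake_angio = G(retino)
gan_loss = adv(D(fake_angio), torch.ones_like(D(fake_angio)))
cycle_loss = l1(F(fake_angio), retino)
unpaired_loss = gan_loss + 10.0 * cycle_loss  # lambda = 10 is a typical weight
```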
