Abstract
This paper explores the application of deep learning-based methods for the multimodal reconstruction of fluorescein angiography from retinography. The objective of this multimodal reconstruction is not only to estimate an invasive modality from a non-invasive one, but also to apply the learned models for transfer learning or domain adaptation. Deep neural networks have proven successful at learning the mapping between complementary image domains, using either paired or unpaired data. Paired data makes it possible to take advantage of the rich information available from the pixelwise correspondence of the paired images. However, this requires the pre-registration of the multimodal image pairs. In the case of retinal images, multimodal registration is a challenging task that may fail in complex scenarios, such as severe pathological cases or low-quality samples. In contrast, the use of generative adversarial networks allows the mapping between image domains to be learned from unpaired data. This avoids the pre-registration of the images and allows all the available data to be included for training. In this work, we analyze both paired and unpaired deep learning-based approaches for the multimodal reconstruction of retinal images. The objective is to understand the implications of each alternative and the considerations for their future usage. For that purpose, we perform several experiments focused on producing a fair comparison between the paired and unpaired approaches.
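To make the distinction between the two settings concrete, the sketch below illustrates, under generic assumptions, the kind of generator objectives typically used in each case: a pixelwise reconstruction term when registered pairs are available, versus an adversarial term with cycle consistency when only unpaired samples from each domain exist. The toy networks, loss weights, and tensor shapes are illustrative placeholders, not the formulation used in this paper.

```python
# Minimal sketch (assumed, generic formulation) contrasting paired and unpaired
# objectives for retinography -> angiography reconstruction.
import torch
import torch.nn as nn

# Toy stand-ins for the forward generator G, the reverse generator F_rev, and a
# discriminator D on angiographies; real models would be full image-to-image CNNs.
G = nn.Conv2d(3, 1, kernel_size=3, padding=1)      # retinography -> angiography
F_rev = nn.Conv2d(1, 3, kernel_size=3, padding=1)  # angiography -> retinography
D = nn.Conv2d(1, 1, kernel_size=3, padding=1)      # discriminator on angiographies

l1 = nn.L1Loss()
adv = nn.BCEWithLogitsLoss()

retino = torch.rand(1, 3, 64, 64)  # non-invasive modality (input)
angio = torch.rand(1, 1, 64, 64)   # invasive modality (target)

# Paired setting: pixelwise correspondence is available after registration, so the
# generator can be supervised directly with an L1 reconstruction term, optionally
# combined with an adversarial term (pix2pix-style).
fake_angio = G(retino)
paired_loss = l1(fake_angio, angio) + adv(D(fake_angio), torch.ones_like(D(fake_angio)))

# Unpaired setting: no pixelwise ground truth, so supervision comes from the
# adversarial term plus a cycle-consistency term (CycleGAN-style), which only
# requires unregistered samples from each domain.
cycle_retino = F_rev(G(retino))
d_fake = D(G(retino))
unpaired_loss = adv(d_fake, torch.ones_like(d_fake)) + 10.0 * l1(cycle_retino, retino)

print(paired_loss.item(), unpaired_loss.item())
```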