Abstract

Visible light optical coherence tomography (VIS-OCT) of the human retina is an emerging imaging modality that uses shorter wavelengths in the visible range rather than conventional near-infrared (NIR) light. It provides one-micron-level axial resolution that better separates stratified retinal layers, and it also enables microvascular oximetry. However, owing to practical limitations of laser safety and patient comfort, the permissible illumination power is much lower than in NIR OCT, making it challenging to acquire high-quality VIS-OCT images and to perform subsequent image analysis. Therefore, improving VIS-OCT image quality by denoising is an essential step in the overall workflow for VIS-OCT clinical applications. In this paper, we provide the first VIS-OCT retinal image dataset from normal eyes, including retinal layer annotations and "noisy-clean" image pairs. We propose an efficient co-learning deep learning framework that performs self-denoising and segmentation in parallel; the two tasks synergize within the same network and improve each other's performance. Segmentation improves significantly (a 2% higher Dice coefficient than the segmentation-only baseline) for the ganglion cell layer (GCL), inner plexiform layer (IPL), and inner nuclear layer (INL) when the available annotation drops to 25%, indicating annotation-efficient training. We also show that the denoising model trained on our dataset generalizes well to a different scanning protocol.
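The abstract describes a co-learning setup in which denoising and segmentation share one network and are trained jointly. The sketch below is only a minimal illustration of that general idea, not the authors' architecture: the layer sizes, loss weighting, and all class and variable names (CoLearningNet, lambda_seg, etc.) are hypothetical assumptions, and the example simply shows two task heads on a shared encoder with both losses back-propagating through it.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small convolutional encoder shared by both tasks (hypothetical layer sizes)."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class CoLearningNet(nn.Module):
    """Shared encoder with two task-specific heads: denoising and layer segmentation."""
    def __init__(self, n_layers=7, feat=32):
        super().__init__()
        self.encoder = SharedEncoder(feat=feat)
        self.denoise_head = nn.Conv2d(feat, 1, 3, padding=1)     # reconstructs a clean B-scan
        self.seg_head = nn.Conv2d(feat, n_layers, 3, padding=1)  # per-pixel retinal-layer logits

    def forward(self, x):
        f = self.encoder(x)
        return self.denoise_head(f), self.seg_head(f)

# Joint training step: both losses back-propagate through the shared encoder,
# so each task can regularize the other (lambda_seg is a hypothetical weight).
model = CoLearningNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
l1, ce = nn.L1Loss(), nn.CrossEntropyLoss()
lambda_seg = 1.0

noisy = torch.randn(2, 1, 128, 128)          # noisy VIS-OCT B-scans (toy data)
clean = torch.randn(2, 1, 128, 128)          # averaged "clean" targets (toy data)
labels = torch.randint(0, 7, (2, 128, 128))  # retinal-layer annotations (toy data)

denoised, seg_logits = model(noisy)
loss = l1(denoised, clean) + lambda_seg * ce(seg_logits, labels)
optim.zero_grad()
loss.backward()
optim.step()
```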
