Abstract

As a powerful diagnostic tool, optical coherence tomography (OCT) has been widely used in various clinical settings. However, OCT images are susceptible to inherent speckle noise that may obscure subtle structural information, owing to the low-coherence interferometric imaging procedure. Many supervised learning-based models have achieved impressive speckle-noise reduction in OCT images when trained on large numbers of noisy-clean image pairs, which are rarely available in clinical practice. In this article, we conducted a comparative study of the denoising performance of different deep neural networks on OCT images under an unsupervised Noise2Noise (N2N) strategy, which trains only on noisy OCT samples. Four representative network architectures, including a U-shaped model, a multi-information-stream model, a straight-information-stream model, and a GAN-based model, were evaluated on an OCT image dataset acquired from healthy human eyes. The results demonstrated that all four unsupervised N2N models produced denoised OCT images with performance comparable to that of supervised learning models, illustrating the effectiveness of unsupervised N2N models for denoising OCT images. Furthermore, U-shaped models and GAN-based models that use a UNet as the generator are the two preferred architectures for reducing speckle noise in OCT images while preserving fine structural information of retinal layers under unsupervised N2N conditions.
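To make the N2N strategy concrete, below is a minimal training sketch in PyTorch. It is not the paper's implementation: the `UNet` placeholder and the `noisy_pair_loader` (assumed to yield two independently noisy scans of the same retinal location) are hypothetical, and the L2 loss between the network output and the second noisy image follows the general Noise2Noise formulation rather than any specific model studied in the article.

```python
# Minimal Noise2Noise (N2N) training sketch (illustrative only).
# Assumptions: a placeholder UNet denoiser and a hypothetical
# `noisy_pair_loader` that yields (noisy_a, noisy_b) pairs of the
# same retinal location; no clean targets are ever used.
import torch
import torch.nn as nn


class UNet(nn.Module):
    """Placeholder denoiser; a real U-shaped model would use an
    encoder-decoder with skip connections."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_n2n(model, noisy_pair_loader, epochs=50, lr=1e-4, device="cuda"):
    """Train a denoiser on noisy-noisy OCT pairs only."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for epoch in range(epochs):
        for noisy_a, noisy_b in noisy_pair_loader:
            noisy_a, noisy_b = noisy_a.to(device), noisy_b.to(device)
            optimizer.zero_grad()
            # The second noisy scan serves as the training target:
            # its zero-mean speckle averages out across the dataset,
            # so the network learns to predict the underlying signal.
            loss = criterion(model(noisy_a), noisy_b)
            loss.backward()
            optimizer.step()
```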
