Deep learning models have achieved commendable success in fundus image analysis tasks. However, the performance of many models is affected by the quality of fundus images. A common quality issue in fundus images is severe black shadow artefacts, primarily caused by opacities in the refractive media or by insufficient or uneven illumination. Such low-quality images can compromise model training and lead models to learn incorrect feature representations. The removal of black shadows can be regarded as a preprocessing problem in the enhancement of degraded images. Solutions typically either increase the overall brightness of the image or restore the dark, shadowed areas. Prior work on increasing image brightness has often used generative adversarial networks (GANs), while restoration has been approached with autoencoders and variational autoencoders (VAEs). However, brightening approaches often fail to properly address local degradations, and restoration techniques can lose detail or over-smooth the shadow areas. In this study, we introduce ClarityDiffuseNet, a model based on diffusion generative models for restoring low-quality fundus images affected by severe black shadows. Our method restores shadowed areas using information from high-quality regions, producing images that are richer in detail and visually closer to artefact-free images. Compared with GAN-based models and inpainting methods, our approach achieves superior performance on four public benchmark datasets, with Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) significantly surpassing state-of-the-art models by 7% and 9%, respectively. Our method also yields notable improvements in downstream tasks: disease diagnosis shows a 9% increase in the area under the curve (AUC) when tested on low-quality datasets, and vessel segmentation shows an approximately 6% improvement in the Dice coefficient under similar conditions. These outcomes underscore the substantial promise of diffusion generative models for fundus image restoration, highlighting their effectiveness in enhancing image quality for further analysis.
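The reported gains are expressed in PSNR and SSIM between restored images and artefact-free references. As a minimal sketch of how such full-reference metrics are typically computed (not the authors' evaluation code; the image file names are hypothetical), the following uses the standard scikit-image implementations:

```python
# Illustrative only: full-reference image quality metrics between a restored
# fundus image and its artefact-free reference, using scikit-image.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names: model output and artefact-free ground truth.
restored = io.imread("restored_fundus.png").astype(np.float64) / 255.0
reference = io.imread("reference_fundus.png").astype(np.float64) / 255.0

# PSNR in dB and SSIM over the color channels (channel_axis=-1 for RGB).
psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
ssim = structural_similarity(reference, restored, data_range=1.0, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```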