Optical coherence tomography (OCT) imaging has emerged as a promising diagnostic tool, especially in ophthalmology. However, speckle noise and downsampling significantly degrade the quality of OCT images and hinder the development of OCT-assisted diagnostics. In this article, we address the super-resolution (SR) problem of retinal OCT images from a statistical modeling point of view. In the first step, we utilize a Weibull mixture model (WMM) as a comprehensive model that captures the specific features of the intensity distribution of retinal OCT data, such as asymmetry and heavy-tailedness. To fit the WMM to the low-resolution OCT images, the expectation-maximization (EM) algorithm is used to estimate the model parameters. Then, to reduce the noise present in the data, a combination of a Gaussian transform and a spatially constrained Gaussian mixture model (GMM) is applied. To super-resolve the OCT images, the expected patch log-likelihood (EPLL) is used, a patch-based algorithm with a multivariate GMM prior; it restores the high-resolution (HR) image with a maximum a posteriori (MAP) estimator. The proposed method is compared with several well-known super-resolution algorithms, both visually and numerically. In terms of the mean-to-standard-deviation ratio (MSR) and the equivalent number of looks (ENL), our method shows a clear superiority over the competing approaches. The proposed method is simple and requires no special preprocessing or measurements. The results illustrate that our method not only significantly suppresses the noise but also successfully reconstructs the image, leading to improved visual quality.
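The first step described above, fitting a Weibull mixture to the intensity data via expectation-maximization, can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function name, initialization, and iteration counts are assumptions. Note that, unlike the Gaussian case, the Weibull M-step has no closed form, so the weighted maximum-likelihood update for each component's shape and scale is done numerically.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_mixture_em(x, n_comp=2, n_iter=20, seed=0):
    """Fit an n_comp-component Weibull mixture to positive data x via EM.

    E-step: posterior responsibilities from the current component densities.
    M-step: mixing weights in closed form; each component's (shape, scale)
    by numerical weighted MLE, since no closed-form update exists.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # Crude initialization: shapes near 1-2, scales at spread-out quantiles.
    shapes = rng.uniform(1.0, 2.0, n_comp)
    scales = np.quantile(x, np.linspace(0.25, 0.75, n_comp))
    weights = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibility of component k for each data point.
        dens = np.stack([weights[k] * weibull_min.pdf(x, shapes[k], scale=scales[k])
                         for k in range(n_comp)])
        dens = np.maximum(dens, 1e-300)          # guard against underflow
        resp = dens / dens.sum(axis=0)
        # M-step: closed-form mixing weights, numerical Weibull MLE.
        weights = resp.mean(axis=1)
        for k in range(n_comp):
            w = resp[k]
            def nll(p):
                # Optimize in log space so shape and scale stay positive.
                kk, lam = np.exp(p)
                return -np.sum(w * weibull_min.logpdf(x, kk, scale=lam))
            res = minimize(nll, np.log([shapes[k], scales[k]]),
                           method="Nelder-Mead")
            shapes[k], scales[k] = np.exp(res.x)
    return weights, shapes, scales
```

On data drawn from two well-separated Weibull components, the recovered scales should land near the true values; the same E/M structure carries over when more components are used to model the asymmetric, heavy-tailed OCT intensity histogram.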