Abstract

The Retinex Theory (RT) and its adaptations have gained significant popularity in the field of image processing. Nevertheless, traditional Retinex algorithms are generally tailored to specific tasks. Moreover, their use of a logarithmic transformation (LT) to convert the multiplicative model into an additive one often results in the loss of texture information in the reflectance layer. In contrast to conventional methods, our approach decomposes the observed image directly, which avoids intermediate transformations and thereby preserves essential texture features. In this study, we introduce a weight-aware ℓ1-ℓ2 technique based on the assumption that the reflectance layer is discontinuous and the illumination layer is spatially smooth. To preserve texture and structural information in the illumination layer, we introduce a weight-aware illumination coefficient with an ℓ1-norm penalty and estimate the reflectance component using the ℓ2-norm. By utilizing weight-aware coefficients, the proposed technique effectively addresses the issue of texture loss in the reflectance layer. Additionally, we employ the ℓ2-norm to extract accurate information from the reflectance layer and apply a bright-channel prior to prevent ambiguity during decomposition. We use an alternating minimization scheme to solve the objective function and adjust the illumination layer using gamma correction and non-linear stretching. The proposed technique not only tackles the problem of texture duplication but also improves the quality of low-light images, and it can be seamlessly integrated with various image- and vision-based tasks. Our evaluation on eight benchmark datasets using 15 quality metrics, against 22 conventional and modern algorithms, shows that the proposed algorithm delivers competitive qualitative and quantitative results without compromising flexibility or scalability. In addition, the proposed model is evaluated on retinal images, and the results demonstrate a substantial improvement in the accuracy of learning-based models.
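The abstract does not specify the solver, but a minimal sketch of the described pipeline may help fix ideas: a bright-channel prior initializes the illumination, an edge-aware (weight-aware) smoothing pass stands in for the weighted ℓ1 smoothness term on the illumination layer, a pixel-wise least-squares (ℓ2) division recovers the reflectance without any logarithmic transform, and gamma correction plus a simple contrast stretch stand in for the non-linear stretching step. The function names, window size, iteration counts, and weighting scheme below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' exact solver): direct decomposition
# S ≈ R ∘ L in the intensity domain, with a bright-channel initial
# illumination, edge-aware smoothing of L, an l2 (least-squares)
# reflectance update, and gamma + stretching of the illumination layer.
import numpy as np

def bright_channel(img, patch=15):
    """Bright-channel prior: per-pixel max over colour channels,
    followed by a local max filter over a patch x patch window."""
    bc = img.max(axis=2)
    pad = patch // 2
    padded = np.pad(bc, pad, mode="edge")
    out = np.empty_like(bc)
    h, w = bc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out

def smooth_illumination(L, guide, alpha=0.5, n_iters=30, eps=1e-3):
    """Edge-aware smoothing: average L with its neighbours, weighted
    down across strong gradients of the guide image (a stand-in for the
    weight-aware l1 smoothness penalty on the illumination layer)."""
    gy, gx = np.gradient(guide)
    w = 1.0 / (np.abs(gx) + np.abs(gy) + eps)   # large weight => smooth here
    w = w / w.max()
    for _ in range(n_iters):
        nb = 0.25 * (np.roll(L, 1, 0) + np.roll(L, -1, 0) +
                     np.roll(L, 1, 1) + np.roll(L, -1, 1))
        L = (1 - alpha * w) * L + alpha * w * nb
    return np.clip(L, eps, 1.0)

def enhance(img, gamma=0.6, eps=1e-3):
    """img: float RGB array in [0, 1]. Returns the enhanced image."""
    S = img.astype(np.float64)
    L = bright_channel(S)                        # initial illumination
    L = smooth_illumination(L, S.max(axis=2))    # smooth, structure-aware L
    R = np.clip(S / (L[..., None] + eps), 0.0, 1.0)   # pixel-wise l2 reflectance
    L_adj = L ** gamma                           # gamma correction
    L_adj = (L_adj - L_adj.min()) / (L_adj.max() - L_adj.min() + eps)  # stretch
    return np.clip(R * L_adj[..., None], 0.0, 1.0)
```

In this sketch the reflectance update is the trivial closed-form S / L that minimizes the ℓ2 fidelity term alone; the paper's joint objective with weighted regularizers and alternating minimization would refine both layers rather than computing each once.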
