Abstract

This work proposes a self-attention-driven Deep Image Prior (DIP) framework for restoring satellite images corrupted by speckle interference and contrast deficiency. The Retinex-based framework leverages the DIP approach to image restoration, requiring only a single input image and eliminating the need for ground truth or training data. An attention mechanism is incorporated into the DIP network architecture to effectively capture fine textures, enhancing the model's restoration capability. Two generative networks produce the luminance and reflectance maps, with loss functions specifically designed to address the speckle interference and contrast distortions present in the input; the generated maps are then recombined to reconstruct the enhanced image. Satellite images from different sensors are used to demonstrate and compare the model's performance. Several state-of-the-art models are evaluated against the proposed strategy using multiple image quality metrics and statistical tests. The experimental results, comprising both visual and statistical inferences, demonstrate the superiority and efficiency of the model. Additionally, an ablation analysis determines the optimal regularization parameters and demonstrates the significance of integrating attention modules at different layers of the architecture.
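The Retinex decomposition and composite loss described above can be sketched minimally as follows. This is an illustrative assumption of the general scheme, not the paper's exact formulation: the total-variation regularizers, the weighting parameters `lam_r` and `lam_l`, and the gamma-based luminance adjustment are all stand-ins for the loss terms the authors actually design.

```python
import numpy as np

def retinex_recombine(reflectance, luminance, gamma=0.5):
    """Recombine Retinex maps: the observed image is modeled as
    I = R * L; contrast is enhanced by gamma-adjusting the luminance.
    (Sketch only -- the gamma adjustment is an assumed enhancement step.)"""
    return reflectance * np.power(luminance, gamma)

def total_variation(x):
    """Anisotropic total variation: a smoothness prior commonly used
    against speckle (assumed regularizer, not the paper's exact term)."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def composite_loss(observed, reflectance, luminance, lam_r=0.1, lam_l=0.1):
    """Data fidelity on the recombined image plus smoothness terms on
    both maps; lam_r and lam_l stand in for the regularization
    parameters explored in the ablation analysis."""
    recon = reflectance * luminance
    fidelity = np.mean((recon - observed) ** 2)
    return (fidelity
            + lam_r * total_variation(reflectance)
            + lam_l * total_variation(luminance))

# Toy example: a flat 4x4 "image" split into constant R and L maps.
obs = np.full((4, 4), 0.25)
R = np.full((4, 4), 0.5)
L = np.full((4, 4), 0.5)
print(composite_loss(obs, R, L))  # both fidelity and TV terms vanish here
```

In the full method, `reflectance` and `luminance` would be the outputs of the two attention-augmented DIP generators, optimized jointly by minimizing a loss of this general shape; here plain arrays stand in for the network outputs.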
