Abstract

Image super-resolution (SR) is a broad research topic with applications in many fields. We implement image super-resolution for satellite images using a residual dense network (RDN). RDN is a CNN-based model, but unlike most CNN-based super-resolution models, it exploits the hierarchical features of the input low-resolution (LR) images and combines both the local and global features present in the image, resulting in better performance. The novelty of our work lies in two aspects. First, we apply the residual dense network to remote sensing data to obtain higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) values than the existing models. Second, we use transfer learning to compensate for the lack of training samples in the remote sensing domain. Our RDN is first trained on the external DIVerse 2K (DIV2K) dataset. This model is then used to obtain high-resolution (HR) images of the remote sensing UC Merced dataset, and the corresponding PSNR and SSIM values are computed for scaling factors ×2, ×4, and ×8. The experimental results demonstrate the better performance of RDN for the super-resolution of remote sensing images compared to existing methods such as the super-resolution generative adversarial network (SRGAN) and the transferred generative adversarial network (TGAN).
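To illustrate the dense connectivity and local residual learning the abstract attributes to RDN, a minimal PyTorch sketch of one residual dense block follows; the channel width, growth rate, and layer count here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of one residual dense block (RDB), the building block of RDN
# (Zhang et al., 2018). Hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Dense connectivity: each conv sees the block input plus the
            # outputs of all earlier layers, so hierarchical features are reused.
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # Local feature fusion: a 1x1 conv compresses the concatenated features.
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual learning: fused features are added back to the input.
        return x + self.fusion(torch.cat(features, dim=1))
```

Stacking several such blocks and fusing their outputs globally gives the combination of local and global features described above.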

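Since the evaluation reports PSNR and SSIM between the super-resolved output and the HR ground truth, here is a minimal sketch of that step using scikit-image; the file names are hypothetical placeholders, not artifacts of this work.

```python
# Minimal sketch of the PSNR/SSIM evaluation step. File paths are hypothetical.
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = imread("ground_truth_hr.png")  # hypothetical HR reference image
sr = imread("rdn_output_sr.png")    # hypothetical network output, same size

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
# channel_axis=-1 treats the last axis as color channels (scikit-image >= 0.19).
ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

Running this once per scaling factor (×2, ×4, ×8) yields the per-scale metric tables compared against SRGAN and TGAN.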