Abstract
Deep learning has recently attracted extensive attention and developed significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods consider each part equally. These methods ignore the salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention in the recovery process. This paper proposes a saliency-guided remote sensing image super-resolution (SG-GAN) method to alleviate the above issue while maintaining the merits of GAN-based methods for generating perceptually pleasant details. More specifically, we exploit the saliency maps of images to guide the recovery in two ways: on the one hand, the saliency detection network in SG-GAN learns high-resolution saliency maps that provide additional structure priors; on the other hand, the well-designed saliency loss imposes a second-order restriction on the super-resolution process, which helps SG-GAN concentrate more on the salient objects of remote sensing images. Experimental results show that SG-GAN achieves competitive PSNR and SSIM compared with advanced super-resolution methods. Visual results demonstrate our superiority in restoring structures while generating remote sensing super-resolution images.
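The abstract does not give the exact form of the saliency loss, so as a rough illustration only (not the authors' formulation), a saliency-weighted reconstruction loss might use the saliency map to penalize errors on salient objects more heavily than on background; the function name, the `alpha` weighting parameter, and the pixel-wise L1 form below are all assumptions for the sketch:

```python
import numpy as np

def saliency_weighted_l1(sr, hr, saliency, alpha=1.0):
    """Hypothetical saliency-weighted pixel loss (illustrative sketch).

    sr, hr   : super-resolved and ground-truth images, shape (H, W).
    saliency : saliency map in [0, 1], shape (H, W); larger values mark
               salient objects (buildings, airplanes, vehicles, ...).
    alpha    : assumed hyperparameter controlling how much extra weight
               salient pixels receive.
    """
    # Background pixels get weight 1; fully salient pixels get 1 + alpha.
    weights = 1.0 + alpha * saliency
    # Mean of the weighted absolute reconstruction error.
    return float(np.mean(weights * np.abs(sr - hr)))
```

With `alpha=1.0`, an error on a fully salient pixel contributes twice as much to the loss as the same error on a background pixel, steering training toward the structured regions the paper emphasizes.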
Academic Editors: Igor Yanovsky
Affiliations: College of Control Science and Engineering, China University of Petroleum, Qingdao 266580, China; College of Oceanography and Space Informatics, China University of Petroleum, Qingdao 266580, China; College of Mechanical and Electrical Engineering, China University of Petroleum, Qingdao 266580, China; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
Highlights
We observe that most of the compared methods suffer from blurred edges and noticeable artifacts, especially in the salient regions of the image.
Compared with the interpolation method (bicubic), the details of images generated by FSRCNN are improved.
Summary
Deep learning has recently attracted extensive attention and developed significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods consider each part equally, ignoring the salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention in the recovery process. This paper proposes a saliency-guided remote sensing image super-resolution (SG-GAN) method to alleviate this issue while maintaining the merits of GAN-based methods for generating perceptually pleasant details. The well-designed saliency loss imposes a second-order restriction on the super-resolution process, which helps SG-GAN concentrate more on the salient objects of remote sensing images.