Abstract
The structure and texture of images are crucial for remote sensing image super-resolution. Generative adversarial networks (GANs) recover image details through adversarial training, but the recovered images often suffer structural distortions, and GANs are difficult to train. In addition, some methods assist reconstruction by introducing image priors, which incurs additional computational cost. To address these issues, we propose a novel structure-texture parallel embedding (SPE) method for super-resolution (SR) of remote sensing images. Our method reconstructs high-quality images without requiring additional image priors. Specifically, we use the global structure information and local texture information of the image in the upsampled feature space to guide the reconstruction result. First, we design a structure preserving block (SPB) to extract global structural features in the upsampled space of the image, yielding global structure information as a prior representation. Then, we design a local texture attention module (LTAM) to restore richer texture details. We conduct extensive experiments on the public Draper dataset. Experimental results show that our proposed method not only achieves a better trade-off between computational cost and performance, but also outperforms several existing SR methods in terms of both objective evaluation metrics and subjective visual quality.
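The abstract does not specify the internal design of the LTAM, but the general idea of local texture attention can be illustrated with a minimal sketch: compute a per-pixel texture cue (here, local variance over a small window, a common but hypothetical choice) and use it to reweight the feature map so that textured regions receive more emphasis. All names and design choices below are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def local_texture_attention(feat, window=3):
    """Hypothetical sketch of local texture attention.

    Uses local variance as a texture cue, squashes it to (0, 1)
    with a sigmoid, and rescales the feature map. The paper's
    LTAM design is not specified in the abstract; this is an
    illustration of the general technique only.
    """
    c, h, w = feat.shape
    pad = window // 2
    # Pad spatial dims so every location has a full window.
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    var_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[:, i:i + window, j:j + window]
            var_map[i, j] = patch.var()  # texture cue at (i, j)
    # Sigmoid turns the variance map into attention weights in (0, 1).
    attn = 1.0 / (1.0 + np.exp(-var_map))
    # Broadcast the spatial attention map over all channels.
    return feat * attn

feat = np.random.rand(4, 8, 8)
out = local_texture_attention(feat)
```

The attention map here is purely spatial and shared across channels; a learned module would typically replace the hand-crafted variance cue with convolutional layers trained end-to-end.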