Abstract

Convolutional neural networks have recently achieved breakthroughs in remote sensing image super-resolution (SR). Most of these methods place upsampling layers at the end of the model to perform enlargement, which ignores feature extraction in the high-dimensional space and thus limits SR performance. To address this problem, we propose a new SR framework for remote sensing images that enhances the high-dimensional feature representation after the upsampling layers. We name the proposed method the transformer-based enhancement network (TransENet), in which transformers are introduced to exploit features at different levels. The core of TransENet is a transformer-based multistage enhancement structure, which can be combined with traditional SR frameworks to fuse multiscale high- and low-dimensional features. Specifically, in this structure, the encoders embed the multilevel features from the feature extraction part, and the decoders fuse these encoded embeddings. Experimental results demonstrate that our proposed TransENet improves super-resolved results and obtains superior performance over several state-of-the-art methods.
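The abstract describes the overall data flow: multilevel low-dimensional features are encoded, the image is enlarged by upsampling, and a decoder fuses the encoded multilevel features into the high-dimensional (post-upsampling) representation via attention. The following is a minimal, purely illustrative NumPy sketch of that flow; all class and function names (`ToyTransENet`, `attention`, the single-head projections) are hypothetical simplifications and not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention (single head, no masking).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

class ToyTransENet:
    """Hypothetical sketch of the multistage enhancement idea:
    encode multilevel low-dimensional features, then let the upsampled
    high-dimensional tokens attend to (decode from) those encodings."""

    def __init__(self, dim=16, scale=2, seed=0):
        rng = np.random.default_rng(seed)
        # One shared encoder projection for brevity; the paper uses
        # separate transformer encoders per feature level.
        self.w_enc = rng.standard_normal((dim, dim)) * 0.1
        self.w_dec_q = rng.standard_normal((dim, dim)) * 0.1
        self.scale = scale
        self.dim = dim

    def upsample(self, feat):
        # Nearest-neighbour enlargement of an (H*W, dim) token grid.
        h = w = int(np.sqrt(feat.shape[0]))
        grid = feat.reshape(h, w, -1)
        grid = grid.repeat(self.scale, axis=0).repeat(self.scale, axis=1)
        return grid.reshape(-1, self.dim)

    def forward(self, low_level_feats):
        # 1) Encode each low-dimensional feature level.
        encoded = [f @ self.w_enc for f in low_level_feats]
        memory = np.concatenate(encoded, axis=0)
        # 2) Upsample the deepest level into the enlarged (high-dim) space.
        high = self.upsample(low_level_feats[-1])
        # 3) Decoder: high-dim tokens attend to the multilevel memory,
        #    fused back through a residual connection.
        q = high @ self.w_dec_q
        return high + attention(q, memory, memory)

net = ToyTransENet(dim=16, scale=2)
# Three feature levels, each an 8x8 grid of 16-dim tokens.
levels = [np.random.default_rng(i).standard_normal((64, 16)) for i in range(3)]
out = net.forward(levels)
print(out.shape)  # (256, 16): a 16x16 token grid after 2x enlargement
```

The key point the sketch illustrates is that enhancement happens *after* upsampling: the enlarged tokens query the encoded multiscale features, rather than all feature fusion occurring before the final enlargement layer.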
