Abstract

Due to increasingly complex factors of image degradation, inferring the high-frequency details of remote sensing imagery is more difficult than for ordinary digital photos. This paper proposes an adaptive multi-scale feature fusion network (AMFFN) for remote sensing image super-resolution. Firstly, features are extracted from the original low-resolution image. Then several adaptive multi-scale feature extraction (AMFE) modules, built on squeeze-and-excitation and adaptive gating mechanisms, are adopted for feature extraction and fusion. Finally, the sub-pixel convolution method is used to reconstruct the high-resolution image. Experiments are performed on three datasets; the key characteristics, such as the number of AMFEs and the gating connection scheme, are studied, and super-resolution of remote sensing imagery at different scale factors is analyzed qualitatively and quantitatively. The results show that our method outperforms classic methods such as the Super-Resolution Convolutional Neural Network (SRCNN), the Efficient Sub-Pixel Convolutional Network (ESPCN), and the multi-scale residual CNN (MSRN).
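The abstract describes a three-stage pipeline: shallow feature extraction from the low-resolution input, a chain of AMFE modules, and sub-pixel convolution for reconstruction. The PyTorch sketch below shows one plausible way to wire such a pipeline together; the internals of the AMFE block, the number of blocks, and the three-channel input are placeholder assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AMFEBlock(nn.Module):
    """Placeholder for the paper's adaptive multi-scale feature extraction
    (AMFE) module; its internals (multi-scale convolutions, squeeze-and-excitation,
    adaptive gating) are assumptions, not the authors' exact design."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection is an assumption

class AMFFNSketch(nn.Module):
    def __init__(self, channels=128, num_amfe=4, scale=4):
        super().__init__()
        # Stage 1: shallow feature extraction from the LR input (3 bands assumed)
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Stage 2: a chain of AMFE modules
        self.blocks = nn.ModuleList(AMFEBlock(channels) for _ in range(num_amfe))
        # Stage 3: sub-pixel (pixel-shuffle) reconstruction to the HR image
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        feat = self.head(lr)
        for block in self.blocks:
            feat = block(feat)
        return self.tail(feat)

sr = AMFFNSketch()(torch.randn(1, 3, 48, 48))  # -> (1, 3, 192, 192) for scale 4
```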

Highlights

  • Image super-resolution (SR) is a classical yet challenging problem in the field of computer vision. The goal of image super-resolution is to reconstruct a visually pleasing high-resolution (HR) image from one or more low-resolution (LR) images [1]

  • Adaptive feature information extraction and fusion would be better suited to remote sensing imagery super-resolution, due to the complex factors of image degradation and the diversity of image content

  • For remote sensing image super-resolution, this paper proposes an adaptive multi-scale feature fusion network (AMFFN) that can extract dense features directly from the original low-resolution image


Summary

Introduction

Image super-resolution (SR) is a classical yet challenging problem in the field of computer vision. Xu et al. [20] proposed a global dense feature fusion convolutional network (DFFNet) for single-image super-resolution at different scale factors, in which cascaded feature fusion blocks were used to learn global features in both the spatial and channel directions. Adaptive feature information extraction and fusion would be better suited to remote sensing imagery super-resolution, due to the complex factors of image degradation and the diversity of image content. The shallow features are obtained as A0 = w0 ∗ ILR + b0, where A0 is the original feature maps extracted from the low-resolution remote sensing imagery ILR, w0 denotes the filters of the convolutional layer (128 filters with a spatial size of 3 × 3 in this paper), b0 denotes the biases of the convolutional layer, and ‘∗’ represents the convolution operation. The L1 loss function is chosen to avoid introducing unnecessary training tricks and to reduce computations.
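As a concrete illustration of that first layer and loss choice, the following minimal PyTorch snippet instantiates a single 3 × 3 convolution with 128 filters and an L1 loss; the three-channel input is an assumption (multispectral imagery may have a different number of bands), and the variable names are only illustrative.

```python
import torch
import torch.nn as nn

# First feature-extraction layer: A0 = w0 * I_LR + b0, with the 128 filters of
# spatial size 3x3 stated in the paper ('*' denotes convolution). The 3 input
# channels are an assumption about the imagery's band count.
first_layer = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=3, padding=1)

# L1 (mean absolute error) loss between the reconstructed and ground-truth HR images.
criterion = nn.L1Loss()

i_lr = torch.randn(1, 3, 48, 48)  # toy low-resolution patch
a0 = first_layer(i_lr)            # A0 has shape (1, 128, 48, 48)
```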

Adaptive Multi-Scale Feature Extraction
Multi-Scale Feature Extraction Unit
Feature Filtering Unit
Feature Gating Unit
Datasets and Performance Metrics
Number of AMFEs
Adaptive Gating Connection
Results with AMFFN
Conclusions
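The outline above lists a feature filtering unit and a feature gating unit inside the AMFE module, which the abstract associates with squeeze-and-excitation and adaptive gating. The sketch below shows a standard squeeze-and-excitation channel-attention unit and one simple way a learned gate could blend features; the reduction ratio, the 1 × 1 gating convolution, and the blending rule are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FeatureFilteringUnit(nn.Module):
    """Standard squeeze-and-excitation channel attention; the paper's feature
    filtering unit is SE-based, but the reduction ratio and layer choices here
    are assumptions."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average pooling
        self.fc = nn.Sequential(              # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels by their learned importance

class FeatureGatingUnit(nn.Module):
    """A simple learned gate that blends a block's output features with the
    features entering the block; only an illustrative stand-in for the paper's
    adaptive gating mechanism."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x_in, x_out):
        g = self.gate(torch.cat([x_in, x_out], dim=1))
        return g * x_out + (1 - g) * x_in
```

In this sketch the gate learns, per position and channel, how much of the block's output to keep versus how much of its input to pass through unchanged.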
