Abstract

The application of satellite remote sensing images has become increasingly widespread, but the images observed by satellite sensors are frequently of low resolution (LR) and thus cannot fully meet the requirements of object identification and analysis. To fully exploit the multi-scale characteristics of objects in remote sensing images, this paper presents a multi-scale residual neural network (MRNN). MRNN exploits the multi-scale nature of satellite images to accurately reconstruct the high-frequency information needed for super-resolution (SR) satellite imagery. Patches of different sizes are first extracted from LR satellite images to fit objects of different scales. Large-, middle-, and small-scale deep residual neural networks are designed to simulate receptive fields of different sizes, acquiring relatively global, contextual, and local information for the prior representation. A fusion network then refines the information from the different scales. MRNN fuses the complementary high-frequency information from the differently scaled networks to reconstruct the desired high-resolution satellite image, in line with the human visual experience of "look in multi-scale to see better". Experimental results on the SpaceNet and NWPU-RESISC45 satellite image databases show that the proposed approach outperforms several state-of-the-art SR algorithms in terms of both objective and subjective image quality.
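To make the described architecture concrete, the following is a minimal PyTorch sketch of the idea: three residual branches with different receptive fields whose outputs a fusion network combines to predict the high-frequency residual. The branch depths, kernel sizes, and channel counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """A convolutional branch whose kernel size sets its receptive field.

    Larger kernels take in wider context ("look from farther away").
    Depth and width here are assumed for illustration only.
    """
    def __init__(self, kernel_size, depth=5, channels=64):
        super().__init__()
        pad = kernel_size // 2
        layers = [nn.Conv2d(1, channels, kernel_size, padding=pad), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class MRNNSketch(nn.Module):
    """Small-, middle-, and large-scale branches plus a fusion network.

    The network predicts the high-frequency residual and adds it back
    to the (bicubic-upsampled) LR input -- i.e., residual learning.
    """
    def __init__(self, channels=64):
        super().__init__()
        self.small = Branch(kernel_size=3)   # local information
        self.middle = Branch(kernel_size=5)  # contextual information
        self.large = Branch(kernel_size=7)   # relatively global information
        self.fusion = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, lr_upsampled):
        feats = torch.cat([self.small(lr_upsampled),
                           self.middle(lr_upsampled),
                           self.large(lr_upsampled)], dim=1)
        residual = self.fusion(feats)           # fused high-frequency detail
        return lr_upsampled + residual          # reconstructed HR image

# Quick shape check on a 64x64 upsampled luminance patch.
sr = MRNNSketch()(torch.randn(1, 1, 64, 64))
print(sr.shape)  # torch.Size([1, 1, 64, 64])
```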

Highlights

  • Remote sensing satellites, which observe objects on the ground from outer space, are widely used in various real-world applications, such as environmental monitoring, resource exploration, disaster warning, and military use

  • To verify the performance of the multi-scale residual neural network (MRNN), we conducted experiments on two satellite image datasets, SpaceNet and Northwestern Polytechnical University (NWPU)-RESISC45, with all algorithms using the same amount of training data

  • This paper presents a multi-scale residual convolutional neural network (CNN), named MRNN, based on the characteristics of satellite images, to enhance SR performance


Summary

Introduction

Remote sensing satellites, which observe objects on the ground from outer space, are widely used in various real-world applications, such as environmental monitoring, resource exploration, disaster warning, and military use. Zeiler et al. [37] visualized convolutional networks to show that different network layers play different roles in representing features, simulating the ventral pathway to enhance performance [38–42]. They indicated that hierarchical features at different scales effectively improve the ability to acquire global information. Inspired by the observation of "look closer to see better" [32], we propose a flexible and versatile multi-scale residual deep neural network for satellite image SR, named MRNN, for the hierarchical reconstruction of satellite imagery with high-resolution (HR) detail. In this network, the multi-scale receptive fields are analogous to human eyes observing a scene from different distances.
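The link between network design and "viewing distance" can be made concrete: for stacked stride-1 convolutions, d layers of k×k kernels see a receptive field of d(k−1)+1 pixels, so branches built from larger kernels (or more layers) take in progressively wider context. A minimal sketch follows; the depths and kernel sizes are assumptions for illustration, not the paper's configuration.

```python
def receptive_field(depth: int, kernel_size: int) -> int:
    """Receptive field of `depth` stacked stride-1 convolutions:
    depth * (kernel_size - 1) + 1 pixels."""
    return depth * (kernel_size - 1) + 1

# Illustrative branch configurations (assumed, not the paper's settings):
for name, k in [("small", 3), ("middle", 5), ("large", 7)]:
    rf = receptive_field(5, k)
    print(f"{name}-scale branch: 5 layers of {k}x{k} -> {rf}x{rf} px field")
```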

Satellite Imagery SR Based on Multi-Scale Residual Neural Network
Multi-Scale SR
Residual Fusion Network
Loss Function
Experimental Data
Training Parameters
Complementarity Analysis of Multi-Scale Residual
Performance and Model Trade-Offs
Visualizing the Learned Filters and Feature Maps
Performance Comparison with State-of-the-Art SR Algorithms
Multi-Scale Prior Information
Residual Learning Versus Pixel Learning
Subpixel Network Versus Pixel Network
Applicability of the Proposed Method
Findings
Conclusions