Abstract

Video satellite imagery has become a hot research topic in Earth observation due to its ability to capture dynamic information. However, its high temporal resolution comes at the expense of spatial resolution. In recent years, deep learning (DL) based super-resolution (SR) methods have played an essential role in improving the spatial resolution of video satellite images. Instead of fully considering the degradation process, most existing DL-based methods attempt to learn the relationship between low-resolution (LR) satellite video frames and their corresponding high-resolution (HR) ones. In this paper, we propose model-based deep neural networks for video satellite imagery SR (VSSR). The VSSR is composed of three main modules: a degradation estimation module, an intermediate image generation module, and a multi-frame feature fusion module. First, the blur kernel and noise level of the LR video frames are flexibly estimated by the degradation estimation module. Second, an intermediate image generation module is proposed to iteratively solve two optimization subproblems, and the outputs of this module are intermediate SR frames. Third, a three-dimensional (3D) feature fusion subnetwork is leveraged to fuse the features from multiple video frames. Different from previous video satellite SR methods, the proposed VSSR is a multi-frame-based method that combines the advantages of both learning-based and model-based methods. Experiments on real-world Jilin-1 and OVS-1 video satellite images have been conducted, and the SR results demonstrate that the proposed VSSR achieves superior visual effects and quantitative performance compared with state-of-the-art methods.
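To make the three-stage pipeline described above concrete, the following PyTorch sketch shows one way such a model-based SR network could be organized: a degradation estimator predicting a blur kernel and noise level, an unrolled intermediate-image generator alternating data-consistency and learned-prior steps, and a 3D-convolutional fusion subnetwork. Every class name, layer width, kernel size, and the 4x scale factor here are illustrative assumptions, not the authors' actual VSSR design.

```python
# Minimal, illustrative sketch of a three-module VSSR-style pipeline (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DegradationEstimator(nn.Module):
    """Predicts a per-frame blur kernel and noise level from an LR frame (assumed design)."""

    def __init__(self, kernel_size=15):
        super().__init__()
        self.kernel_size = kernel_size
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.kernel_head = nn.Linear(32, kernel_size * kernel_size)
        self.noise_head = nn.Linear(32, 1)

    def forward(self, lr):
        f = self.features(lr).flatten(1)
        kernel = torch.softmax(self.kernel_head(f), dim=1)           # kernel sums to 1
        kernel = kernel.view(-1, 1, self.kernel_size, self.kernel_size)
        sigma = torch.sigmoid(self.noise_head(f))                    # noise level in [0, 1]
        return kernel, sigma


class IntermediateGenerator(nn.Module):
    """Unrolled solver: alternates a data-consistency step and a learned prior step."""

    def __init__(self, scale=4, iterations=4):
        super().__init__()
        self.scale, self.iterations = scale, iterations
        self.prior = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))

    def degrade(self, sr, kernel):
        # Re-degrade the current SR estimate: blur with the estimated kernel, then downsample.
        pad = kernel.shape[-1] // 2
        blurred = torch.cat([F.conv2d(sr[i:i + 1], kernel[i:i + 1], padding=pad)
                             for i in range(sr.shape[0])])
        return F.interpolate(blurred, scale_factor=1 / self.scale,
                             mode='bicubic', align_corners=False)

    def forward(self, lr, kernel, sigma):
        sr = F.interpolate(lr, scale_factor=self.scale, mode='bicubic', align_corners=False)
        for _ in range(self.iterations):
            # (1) data-consistency step: push the re-degraded SR frame toward the LR frame
            residual = lr - self.degrade(sr, kernel)
            sr = sr + F.interpolate(residual, scale_factor=self.scale,
                                    mode='bicubic', align_corners=False)
            # (2) learned prior step, conditioned on the estimated noise level
            noise_map = sigma.view(-1, 1, 1, 1).expand_as(sr)
            sr = sr + self.prior(torch.cat([sr, noise_map], dim=1))
        return sr


class FusionNet3D(nn.Module):
    """Fuses the intermediate SR frames along the temporal axis with 3D convolutions."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 1, 3, padding=1))

    def forward(self, frames):                      # frames: (B, 1, T, H, W)
        return self.body(frames).mean(dim=2)        # collapse the temporal dimension


# Toy usage: five 32x32 LR frames -> one fused 128x128 SR frame.
lr_video = torch.rand(1, 1, 5, 32, 32)              # (batch, channel, time, height, width)
estimator, generator, fusion = DegradationEstimator(), IntermediateGenerator(), FusionNet3D()
intermediate = [generator(lr_video[:, :, t], *estimator(lr_video[:, :, t]))
                for t in range(lr_video.shape[2])]
hr = fusion(torch.stack(intermediate, dim=2))       # (1, 1, 128, 128)
```

The per-frame unrolled refinement followed by temporal fusion mirrors the abstract's description of intermediate SR frames being produced first and merged afterwards; the actual paper's optimization subproblems and network details may differ substantially from this sketch.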

Highlights

  • Over the past few years, video satellite imagery [1,2,3,4] has received considerable attention in the remote sensing and aerospace field

  • We validate the effectiveness of our proposed video satellite imagery SR (VSSR) by conducting a group of experiments on the real-world Jilin-1 and OVS-1 video satellite data

  • The VSSR is compared against state-of-the-art SR methods, and the experimental results on the Jilin-1 and OVS-1 data are presented


Summary

Introduction

Over the past few years, video satellite imagery [1,2,3,4] has received considerable attention in the remote sensing and aerospace fields. Compared with traditional satellites that acquire static images [5,6,7,8], video satellites provide a novel way to capture continuous video. They can acquire dynamic information about objects on the Earth's surface and offer great advantages in dynamic monitoring tasks such as moving-ship detection [9], object tracking [10], and object detection [11]. Since SR is a classical ill-posed inverse problem [15] that can increase the spatial resolution and clarity of low-quality images, it is an important but challenging task in video satellite imagery.

