Abstract

Image super-resolution (SR) is an effective solution to the limited spatial resolution of video satellite images, which is caused by degradation and compression in the imaging phase. For processing satellite videos, the commonly employed deep learning based single-frame SR (SFSR) framework has limited performance because it does not exploit the complementary information between video frames. In contrast, multi-frame SR (MFSR) can exploit temporal sub-pixel information to reconstruct high-resolution (HR) imagery. However, although deeper and wider deep learning networks provide powerful feature representations for SR methods, accurately reconstructing the boundaries of ground objects in video satellite images remains a challenge. In this paper, to address these issues, we propose an edge-guided video super-resolution (EGVSR) framework for video satellite image SR, which couples an MFSR model and an edge-SFSR (E-SFSR) model in a unified network. The EGVSR framework is composed of an MFSR branch and an edge branch. The MFSR branch extracts complementary features from consecutive video frames. Concurrently, the edge branch acts as an SFSR model that translates edge maps from the low-resolution modality to the HR one. At the final SR stage, the DBFM is built to focus on the promising inner representations of the features of the two branches and fuse them. Extensive experiments on video satellite imagery show that the proposed EGVSR method achieves superior performance compared to representative deep learning based SR methods.
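To make the dual-branch layout described above concrete, the following is a minimal PyTorch sketch of a two-branch network whose MFSR branch takes a stack of consecutive LR frames, whose edge branch takes the LR edge map of the reference frame, and whose outputs are fused before sub-pixel upsampling. All module names, channel sizes, and the stand-in fusion step (a 1x1 convolution over concatenated features) are illustrative assumptions for exposition, not the authors' DBFM or their actual architecture.

import torch
import torch.nn as nn

class DualBranchFusionSketch(nn.Module):
    """Illustrative two-branch SR layout: an MFSR branch over stacked
    consecutive LR frames and an edge branch over the reference frame's
    LR edge map, fused before upsampling. Hypothetical configuration."""

    def __init__(self, num_frames=5, feats=64, scale=4):
        super().__init__()
        # MFSR branch: consumes the stacked consecutive frames (single-band here)
        self.mfsr_branch = nn.Sequential(
            nn.Conv2d(num_frames, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Edge branch: consumes the LR edge map of the reference frame
        self.edge_branch = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Stand-in for the DBFM: a 1x1 convolution over the concatenated
        # branch features; the paper's fusion module is more elaborate.
        self.fuse = nn.Conv2d(2 * feats, feats, 1)
        # Sub-pixel upsampling to the HR grid
        self.upsample = nn.Sequential(
            nn.Conv2d(feats, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_frames, lr_edge):
        # lr_frames: (B, num_frames, H, W); lr_edge: (B, 1, H, W)
        f_mfsr = self.mfsr_branch(lr_frames)
        f_edge = self.edge_branch(lr_edge)
        fused = self.fuse(torch.cat([f_mfsr, f_edge], dim=1))
        return self.upsample(fused)  # (B, 1, H*scale, W*scale)

# Example usage with dummy tensors (x4 upscaling of 32x32 patches)
model = DualBranchFusionSketch()
sr = model(torch.randn(2, 5, 32, 32), torch.randn(2, 1, 32, 32))  # -> (2, 1, 128, 128)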
