Abstract

Super-resolution (SR) for satellite video data has been an active research topic in remote sensing video analysis. Existing satellite video SR methods assume that the blur kernel in the imaging degradation model is known. However, the blur kernel in real satellite videos is usually unknown, which inevitably leads to poor performance when the true blur kernel is inconsistent with the predefined one. To address this issue, this article proposes a deep joint estimation network for satellite video SR (JENSVSR), which jointly estimates blur kernels and super-resolved frames. Specifically, JENSVSR is composed of a video SR subnetwork and a blur kernel estimation subnetwork. On one hand, the video SR subnetwork makes use of multiple video frames to generate super-resolved satellite frames; to effectively fuse information from adjacent frames, an alignment and fusion module is proposed in the feature space. On the other hand, the blur kernel estimation subnetwork predicts the blur kernels. The two subnetworks are coupled by cross-task feature fusion modules (CTFFMs) to achieve joint estimation rather than two-step independent estimation. The performance of the proposed method is evaluated on synthetic and real satellite videos, and the experimental results show that it is superior to current state-of-the-art SR methods.
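To make the two-branch, cross-task coupling described above concrete, the following is a minimal conceptual sketch in PyTorch, not the authors' implementation. The module and parameter names (`SR/kernel branches`, `CTFFM`, `channels`, `kernel_size`) are hypothetical, single-frame stand-ins that only mirror the abstract's description of joint SR and blur kernel estimation.

```python
# Hedged sketch: joint SR + blur kernel estimation with cross-task feature
# fusion. All module names and hyperparameters are illustrative assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn


class CTFFM(nn.Module):
    """Cross-task feature fusion: each branch sees a fused view of both tasks."""
    def __init__(self, channels):
        super().__init__()
        self.fuse_sr = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.fuse_kernel = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_sr, f_kernel):
        cat = torch.cat([f_sr, f_kernel], dim=1)
        return self.fuse_sr(cat), self.fuse_kernel(cat)


class JointEstimationNet(nn.Module):
    """Toy single-frame version of joint SR frame and blur kernel prediction."""
    def __init__(self, channels=64, scale=4, kernel_size=15):
        super().__init__()
        self.sr_feat = nn.Conv2d(3, channels, 3, padding=1)
        self.kernel_feat = nn.Conv2d(3, channels, 3, padding=1)
        self.ctffm = CTFFM(channels)
        # SR head: pixel-shuffle upsampling to the high-resolution grid.
        self.sr_head = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Kernel head: predict a normalized blur kernel per input frame.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, kernel_size ** 2),
            nn.Softmax(dim=1),
        )

    def forward(self, lr_frame):
        f_sr = self.sr_feat(lr_frame)
        f_k = self.kernel_feat(lr_frame)
        # Coupled estimation: features are exchanged instead of running
        # kernel estimation and SR as two independent steps.
        f_sr, f_k = self.ctffm(f_sr, f_k)
        return self.sr_head(f_sr), self.kernel_head(f_k)


if __name__ == "__main__":
    net = JointEstimationNet()
    lr = torch.randn(1, 3, 32, 32)         # toy low-resolution frame
    sr, kernel = net(lr)
    print(sr.shape, kernel.shape)          # (1, 3, 128, 128), (1, 225)
```

The sketch omits the multi-frame alignment and fusion module; in the paper, adjacent frames are aligned and fused in feature space before the SR head, and the kernel branch operates jointly with it through the CTFFMs.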
