Space target super-resolution (SR) is a domain-specific single-image SR problem that aims to help distinguish satellites and spacecraft from numerous pieces of space debris. Compared with other object SR problems, space target images are typically of low quality and exhibit varied degradation conditions caused by long imaging distances and motion blur, which significantly reduces the reliability of manual classification, especially for small targets such as satellite payloads. To address this challenge, we present an end-to-end SR and deblurring network (SRDN). Concretely, focusing on low-resolution (LR) space target images with blind motion blur, we integrate SR and deblurring into a unified generative adversarial network (GAN)-based framework to improve image quality. We implement a deblurring module that uses contrastive learning to extract degradation features, and we add symmetric downsampling and upsampling modules to the SR network to restore texture information, while shortcut connections are redesigned to maintain global similarity. Extensive experiments on the public satellite dataset BUAA-SID-share1.5 demonstrate that our network outperforms state-of-the-art SR and deblurring methods.
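To make the described architecture concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a contrastive degradation encoder whose embedding conditions a generator with symmetric downsampling/upsampling and shortcut connections, followed by a pixel-shuffle upsampler. All module names, channel widths, and the x4 scale factor are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DegradationEncoder(nn.Module):
    """Encodes a blurred LR image into a degradation embedding.
    In a contrastive setup, embeddings of patches with the same degradation are
    pulled together and those from differently degraded images pushed apart."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.LeakyReLU(0.1, True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.LeakyReLU(0.1, True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        feat = self.net(x).flatten(1)
        # normalized embedding used for a contrastive loss and for conditioning
        return F.normalize(self.proj(feat), dim=1)


class SRDNGenerator(nn.Module):
    """Joint SR + deblurring generator: symmetric encoder-decoder with skips."""
    def __init__(self, base=64, scale=4, deg_dim=64):
        super().__init__()
        self.head = nn.Conv2d(3, base, 3, padding=1)
        # symmetric downsampling path
        self.down1 = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1)
        # degradation embedding injected at the bottleneck (simple concat here)
        self.fuse = nn.Conv2d(base * 4 + deg_dim, base * 4, 1)
        # symmetric upsampling path with shortcut connections
        self.up1 = nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1)
        # final upsampler to the SR scale
        self.tail = nn.Sequential(
            nn.Conv2d(base, base * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(base, 3, 3, padding=1),
        )

    def forward(self, lr, deg_emb):
        f0 = F.leaky_relu(self.head(lr), 0.1)
        f1 = F.leaky_relu(self.down1(f0), 0.1)
        f2 = F.leaky_relu(self.down2(f1), 0.1)
        cond = deg_emb[:, :, None, None].expand(-1, -1, f2.shape[2], f2.shape[3])
        f2 = self.fuse(torch.cat([f2, cond], dim=1))
        u1 = F.leaky_relu(self.up1(f2), 0.1) + f1   # shortcut preserves global structure
        u2 = F.leaky_relu(self.up2(u1), 0.1) + f0
        return self.tail(u2)


if __name__ == "__main__":
    lr = torch.randn(2, 3, 32, 32)        # blurred LR space-target patches
    emb = DegradationEncoder()(lr)        # degradation feature (contrastive branch)
    sr = SRDNGenerator()(lr, emb)         # x4 SR + deblurred output
    print(sr.shape)                       # torch.Size([2, 3, 128, 128])
```

In a GAN-based training setup such as the one the abstract describes, this generator would be paired with a discriminator and adversarial plus reconstruction losses, with the contrastive objective applied to the degradation encoder; those components are omitted here for brevity.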