With the rapid advancement of space technology, space target observation images have become an essential tool for the precise measurement and shape analysis of spacecraft. However, due to the challenging conditions of the space environment, these images often suffer from blurring and distortion, which hampers the effectiveness of spacecraft observation and measurement missions. Although recent progress has been made in super-resolution reconstruction techniques, the limited processing capacity of on-board equipment prevents the direct deployment of these high-complexity methods. In this paper, we propose an efficient and lightweight super-resolution reconstruction algorithm for space target observation images, called the Pyramid Frequency-Aware Network. Specifically, we adopt a divide-and-conquer strategy that processes low-frequency and high-frequency features separately, ensuring high-quality feature extraction while reducing the number of parameters. To further improve the model's ability to capture edges and detailed textures, we introduce a pyramidal wavelet decomposition and a multi-scale large separable kernel attention module. For the high-frequency information, we design an enhanced fusion convolution block that facilitates multi-scale feature extraction and channel mixing. Furthermore, we establish a dataset of space target observation images, which can serve as a valuable reference for future studies on the reconstruction of such images. Extensive experimental results demonstrate that our Pyramid Frequency-Aware Network achieves an excellent balance among peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), parameter count, FLOPs, and running time, both on public standard datasets and on our self-built space target observation image dataset. Additionally, the network is lightweight enough to be deployed on resource-constrained equipment, such as satellites.
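To make the divide-and-conquer idea concrete, the following is a minimal PyTorch sketch of a frequency-aware block: features are split into low- and high-frequency sub-bands with a fixed Haar wavelet, each branch is processed with a lightweight path, and the results are merged back. The branch designs (a depthwise large-kernel path for low frequencies, a grouped channel-mixing path for high frequencies) and all names here are illustrative assumptions, not the paper's exact PFAN architecture.

```python
# Sketch only: assumed frequency-aware block, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HaarDecompose(nn.Module):
    """One level of 2D Haar wavelet decomposition (stride 2, per channel)."""

    def __init__(self):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        self.register_buffer("filters", torch.stack([ll, lh, hl, hh]).unsqueeze(1))

    def forward(self, x):
        b, c, h, w = x.shape
        # Apply the four Haar filters to every channel independently.
        f = self.filters.repeat(c, 1, 1, 1)          # (4c, 1, 2, 2)
        y = F.conv2d(x, f, stride=2, groups=c)       # (b, 4c, h/2, w/2)
        y = y.view(b, c, 4, h // 2, w // 2)
        low = y[:, :, 0]                             # LL sub-band
        high = y[:, :, 1:].flatten(1, 2)             # LH/HL/HH sub-bands
        return low, high


class FrequencyAwareBlock(nn.Module):
    """Divide-and-conquer block: low and high frequencies take separate,
    lightweight paths before being merged (illustrative structure)."""

    def __init__(self, channels):
        super().__init__()
        self.decompose = HaarDecompose()
        # Low-frequency branch: depthwise large-kernel path (assumed form).
        self.low_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2, groups=channels),
            nn.GELU(),
            nn.Conv2d(channels, channels, 1),
        )
        # High-frequency branch: grouped conv plus 1x1 channel mixing (assumed form).
        self.high_branch = nn.Sequential(
            nn.Conv2d(3 * channels, 3 * channels, 3, padding=1, groups=3),
            nn.GELU(),
            nn.Conv2d(3 * channels, 3 * channels, 1),
        )
        self.merge = nn.Conv2d(4 * channels, channels, 1)

    def forward(self, x):
        low, high = self.decompose(x)
        low = self.low_branch(low)
        high = self.high_branch(high)
        out = self.merge(torch.cat([low, high], dim=1))      # (b, c, h/2, w/2)
        # Return to the input resolution so the block can be stacked residually.
        out = F.interpolate(out, scale_factor=2, mode="nearest")
        return x + out


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)
    print(FrequencyAwareBlock(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

Because both branches rely on grouped and pointwise convolutions rather than full dense convolutions, such a split keeps the parameter count and FLOPs low, which is the property the abstract emphasizes for deployment on resource-constrained on-board hardware.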