In dynamic environments, such as ground-based optical observations of space targets, anisotropic turbulence and random noise produce distinct image degradations at every moment. These degradations severely reduce the quality of the target images and make restoring them very difficult. However, inconsistent degradation implies that such image sequences carry complementary information. In this paper, a ranking network (Ranknet) is first proposed to ensure that the input sequences have a consistent degradation distribution and to compress the spatial distribution of the sample set. Then, an extraction–refinement neural network (ERnet) is proposed to extract the complementary features and blindly reconstruct a clean image of the observed target. In ERnet, an extraction subnetwork (EN) uses 3D convolutions to extract discriminative features from the multiframe input sequence, and a refinement subnetwork (RN) based on 2D convolutions restores clean images by refining the effective features. In addition, a spatial–temporal attention module (STAM) enhances the features by exploiting the high-quality ones. Experimental results on the restoration of space target images and motion-blurred images confirm the superior performance of ERnet compared with other state-of-the-art methods.
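The extraction–refinement idea described above (3D convolutions over a multiframe stack, temporal fusion, then 2D refinement) can be illustrated with a minimal NumPy sketch. This is not the authors' ERnet: the function names (`conv3d_valid`, `restore`) and the mean-based temporal fusion are hypothetical stand-ins, and the learned kernels, STAM attention, and Ranknet preselection are omitted; only the data flow and tensor shapes are shown.

```python
import numpy as np

def conv3d_valid(x, k):
    """Naive 'valid' 3D cross-correlation — a toy stand-in for the
    learned 3D convolutions the extraction subnetwork (EN) would apply."""
    T, H, W = x.shape
    t, h, w = k.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i + t, j:j + h, l:l + w] * k)
    return out

def restore(frames, k3, k2):
    """Hypothetical ERnet-style pipeline sketch:
    EN: a 3D conv over the frame stack extracts spatio-temporal features;
    a mean over the remaining temporal axis stands in for feature fusion;
    RN: a 2D conv (depth-1 kernel) refines the fused features."""
    feats = conv3d_valid(frames, k3)           # (T', H', W')
    fused = feats.mean(axis=0, keepdims=True)  # collapse temporal axis
    return conv3d_valid(fused, k2[None])[0]    # 2D refinement pass

# Five degraded 16x16 observations of the same target (random placeholder data)
frames = np.random.rand(5, 16, 16)
k3 = np.ones((3, 3, 3)) / 27  # toy averaging kernels in place of learned weights
k2 = np.ones((3, 3)) / 9
clean = restore(frames, k3, k2)
print(clean.shape)  # → (12, 12)
```

With 3×3×3 and 3×3 "valid" kernels, five 16×16 frames shrink to a single 12×12 output; a real network would instead pad, stack many channels, and learn the kernels end to end.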