Abstract
In dynamic environments, such as ground-based optical observations of space targets, anisotropic turbulence and random noise tend to produce distinct image degradations at every moment. These degradations severely reduce the quality of the target images and make their restoration very difficult. However, inconsistent degradation implies that such images contain complementary information. In this paper, a ranking network (Ranknet) is first proposed to ensure that the input sequences have a consistent degradation distribution and to compress the spatial distribution of the sample set. Then, an extraction-refinement neural network (ERnet) is proposed to extract complementary features and blindly reconstruct a clean image of the observed target. In ERnet, an extraction subnetwork (EN) uses 3D convolutions to extract discriminative features from multiframe input sequences, and a refinement subnetwork (RN) based on 2D convolutions restores clean images by refining the effective features. In addition, a spatial-temporal attention module (STAM) is designed to enhance the features by exploiting the high-quality features. Experimental results on the restoration of space target images and motion-blurred images confirm the superior performance of ERnet compared with other state-of-the-art methods.
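As a reading aid only, the sketch below illustrates the extraction-refinement layout described in the abstract: a 3D-convolutional extraction subnetwork (EN) operating on a multiframe input stack, followed by a 2D-convolutional refinement subnetwork (RN) that produces a single restored frame. This is not the authors' code; the layer counts, channel widths, temporal fusion by averaging, and the omission of Ranknet and STAM are all assumptions made for brevity.

```python
import torch
import torch.nn as nn


class ExtractionNet(nn.Module):
    """EN (sketch): 3D convolutions over a (B, C, T, H, W) frame stack."""

    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):            # x: (B, C, T, H, W) degraded sequence
        f = self.body(x)             # (B, feat, T, H, W) spatio-temporal features
        return f.mean(dim=2)         # fuse the temporal axis -> (B, feat, H, W)


class RefinementNet(nn.Module):
    """RN (sketch): 2D convolutions mapping fused features to a clean frame."""

    def __init__(self, feat=32, out_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, f):
        return self.body(f)


class ERnetSketch(nn.Module):
    """Hypothetical end-to-end wiring: EN extracts, RN refines."""

    def __init__(self):
        super().__init__()
        self.en = ExtractionNet()
        self.rn = RefinementNet()

    def forward(self, frames):       # frames: (B, 1, T, H, W)
        return self.rn(self.en(frames))


if __name__ == "__main__":
    seq = torch.randn(2, 1, 5, 64, 64)      # a 5-frame grayscale sequence
    print(ERnetSketch()(seq).shape)         # torch.Size([2, 1, 64, 64])
```

In the paper, the STAM would additionally reweight the extracted features along spatial and temporal dimensions before refinement; that module is omitted here to keep the sketch minimal.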