Abstract

Solar eruptions and the solar wind are sources of space weather disturbances, and extreme-ultraviolet (EUV) observations are widely used in solar-activity research and space weather forecasting. Fengyun-3E carries the Solar X-ray and Extreme Ultraviolet Imager, which provides EUV imaging data. Because these images have relatively low spatial resolution, we investigate super-resolution techniques to improve the data quality. Traditional image-interpolation methods have limited expressive ability, whereas deep-learning methods can learn to reconstruct high-quality images by training on paired data sets. Among the wide variety of super-resolution models, we evaluate three representative ones: Real-ESRGAN, based on generative adversarial networks; the residual channel-attention network (RCAN), based on channel attention; and SwinIR, based on self-attention. Instruments on different satellites differ in observation time, viewing angle, and resolution, so we selected Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) 193 Å images at a similar wavelength as the reference and used a feature-based image-registration method to eliminate slight deformations when building the training data sets. Finally, we compare the above methods in terms of evaluation metrics and visual quality. RCAN achieves the highest peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), while the Real-ESRGAN model performs best on the Learned Perceptual Image Patch Similarity (LPIPS) index, and its results visually show richer fine-scale textures. The corrected super-resolution results can complement the SDO/AIA data to provide solar EUV images with higher temporal resolution for space weather forecasting and solar physics research.
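The abstract mentions feature-based registration of Fengyun-3E EUV frames to SDO/AIA 193 Å references but does not specify the algorithm. As a minimal illustrative sketch, assuming ORB keypoints, brute-force matching, and a RANSAC-fitted homography in OpenCV (the function name and parameter values below are illustrative assumptions, not the authors' pipeline):

```python
import cv2
import numpy as np

def register_to_reference(src, ref, max_features=5000, keep_ratio=0.25):
    """Warp an 8-bit grayscale source image onto a reference image.

    Detects ORB keypoints in both images, matches them, fits a
    homography with RANSAC, and resamples the source onto the
    reference grid to remove small geometric deformations.
    """
    orb = cv2.ORB_create(max_features)
    kp_src, des_src = orb.detectAndCompute(src, None)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)

    # Keep only the best fraction of cross-checked Hamming matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_src, des_ref), key=lambda m: m.distance)
    matches = matches[: max(4, int(len(matches) * keep_ratio))]

    pts_src = np.float32([kp_src[m.queryIdx].pt for m in matches])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # Robust homography estimation; outlier matches are rejected by RANSAC.
    H, _ = cv2.findHomography(pts_src, pts_ref, cv2.RANSAC, 5.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(src, H, (w, h))
```

Aligned image pairs produced in this way can then serve as the low-resolution input and high-resolution target for training the super-resolution models.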
