Abstract

In recent years, new physics of the Sun has been revealed using advanced data with high spatial and temporal resolutions. The Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory has accumulated abundant observational data for the study of solar activity with sufficient cadence, but its spatial resolution (about 1″) is not enough to resolve the subarcsecond structure of the Sun. On the other hand, high-resolution observations from large-aperture ground-based telescopes, such as the 1.6 m Goode Solar Telescope (GST) at the Big Bear Solar Observatory, can achieve a much higher resolution on the order of 0.″1 (about 70 km). However, these high-resolution data have only become available in the past 10 yr, only during limited periods of the day, and over a very limited field of view. The generative adversarial network (GAN) has greatly improved the perceptual quality of images in image translation tasks, and the self-attention mechanism can retrieve rich information from images. This paper uses HMI and GST images to construct a precisely aligned data set based on the scale-invariant feature transform (SIFT) algorithm and to reconstruct the HMI continuum images with four times better resolution. Neural networks based on the conditional GAN and the self-attention mechanism are trained to restore the details of solar active regions and to predict the reconstruction error. The experimental results show that the reconstructed images are in good agreement with GST images, demonstrating the success of resolution improvement using machine learning.
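
The abstract does not give implementation details for the SIFT-based alignment step, so the following is only a minimal sketch of how co-aligning an HMI cutout to a GST image could be done in Python with OpenCV. The function name align_to_reference, the 0.75 ratio-test threshold, the RANSAC reprojection tolerance, and the assumption of 8-bit single-channel inputs are illustrative choices, not details taken from the paper.

```python
import cv2
import numpy as np

def align_to_reference(hmi_img, gst_img):
    """Co-align an HMI cutout to a GST image using SIFT keypoints.

    Both inputs are assumed to be 8-bit, single-channel intensity images
    covering the same active region (hypothetical preprocessing).
    """
    sift = cv2.SIFT_create()
    kp_hmi, des_hmi = sift.detectAndCompute(hmi_img, None)
    kp_gst, des_gst = sift.detectAndCompute(gst_img, None)

    # Match descriptors and keep only distinctive matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_hmi, des_gst, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate a homography from HMI to GST pixel coordinates with RANSAC.
    src = np.float32([kp_hmi[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_gst[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Resample the HMI image onto the GST grid to form an aligned training pair.
    h, w = gst_img.shape
    return cv2.warpPerspective(hmi_img, H, (w, h))
```

Likewise, the self-attention mechanism mentioned above is often realized as a SAGAN-style attention layer over convolutional feature maps. The PyTorch sketch below shows one common variant of such a layer; it is an assumed form, not necessarily the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps (assumed variant)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection
```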
