Abstract

Deep generative models have recently become popular in heliophysics for their capacity to fill gaps in solar observational data sets, thereby helping to mitigate the data scarcity faced in space weather forecasting. A particular type of deep generative model, the conditional Generative Adversarial Network (cGAN), has been used for several years for image‐to‐image (I2I) translation on solar observations. These algorithms, however, have hyperparameters whose values can influence the quality of the synthetic images. In this work, we use magnetograms produced by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) and EUV images from the Atmospheric Imaging Assembly (AIA) to generate Artificial Intelligence (AI) synthetic magnetograms from multiple SDO/AIA channels using a cGAN, more precisely the Pix2PixCC algorithm. We perform a systematic study of the most important hyperparameters to investigate which values yield magnetograms of the highest quality with respect to the Structural Similarity Index (SSIM). We propose a structured way to run trainings with various hyperparameter values, and provide diagnostic and visualization tools for comparing the generated and target images. Our results show that using a larger number of filters in the convolution blocks of the cGAN better reconstructs the fine details of the generated magnetograms. Adding several input channels besides the 304 Å channel does not improve the quality of the generated magnetograms, but the hyperparameters controlling the relative importance of the different loss functions in the optimization do influence the quality of the results.
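As an illustration of the evaluation metric mentioned above, the sketch below computes a simplified, single-window SSIM between a generated magnetogram and its target. The published SSIM (Wang et al., 2004) uses a sliding Gaussian window and averages local scores; the global variant here, along with the array shapes and the `data_range` value, are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # Simplified single-window SSIM: statistics are taken over the
    # whole image instead of a sliding Gaussian window, so this is a
    # coarse proxy for the full metric.
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2  # original SSIM definition
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

# Toy example: a hypothetical "target" magnetogram and a noisy
# stand-in for a generated one, both scaled to [0, 1].
rng = np.random.default_rng(0)
target = rng.random((64, 64))
generated = np.clip(target + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(global_ssim(target, target))     # identical images score 1.0
print(global_ssim(target, generated))  # noisy image scores below 1.0
```

A full study would typically use a windowed implementation such as `skimage.metrics.structural_similarity` rather than this global approximation.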
