Abstract
The volume of data being collected in solar physics has increased exponentially over the past decade, and with the introduction of the Daniel K. Inouye Solar Telescope (DKIST) we will be entering the age of petabyte solar data. Automated feature detection will be an invaluable tool for post-processing solar images to create catalogues of data ready for researchers to use. We propose a deep learning model to accomplish this; a deep convolutional neural network is adept at feature extraction and can process images quickly. We train our network using Hinode/Solar Optical Telescope (SOT) Hα images of a small subset of solar features with different geometries: filaments, prominences, flare ribbons, sunspots and the quiet Sun (i.e. the absence of any of the other four features). We achieve near-perfect performance on classifying unseen images from SOT (≈ 99.9%) in 4.66 seconds. We also, for the first time, explore transfer learning in a solar context. Transfer learning uses pre-trained deep neural networks to help train new deep learning models, i.e. it teaches a new model. We show that our network is robust to changes in resolution by degrading images from SOT resolution (≈ 0.33″ at λ = 6563 Å) to Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) resolution (≈ 1.2″) without a change in performance of our network. However, we also observe where the network fails to generalise: to sunspots from SDO/AIA bands 1600/1700 Å due to small-scale brightenings around the sunspots, and to prominences in SDO/AIA 304 Å due to coronal emission.
Highlights
With each new solar physics mission/telescope, instruments are improving in spatial, temporal and/or wavelength resolution, which equals greater volumes of data
This has led to an exponential increase in the amount of data acquired in the past decade, from < 10 TB per year from Hinode/Solar Optical Telescope (SOT) in 2006 (Tsuneta et al, 2008) to 500 TB per year from the Solar Dynamics Observatory (SDO) in 2012 (Schwer et al, 2002) to 10 000 TB per year expected from the Daniel K. Inouye Solar Telescope (DKIST)
We have shown that a deep convolutional neural network can learn the geometry of features on the Sun
Summary
With each new solar physics mission/telescope, instruments are improving in spatial, temporal and/or wavelength resolution. This has led to an exponential increase in the amount of data acquired in the past decade, from < 10 TB per year from Hinode/Solar Optical Telescope (SOT) in 2006 (Tsuneta et al, 2008) to 500 TB per year from the Solar Dynamics Observatory (SDO) in 2012 (Schwer et al, 2002) to 10 000 TB per year expected from the Daniel K. Inouye Solar Telescope (DKIST). To understand NNs, we must look back to the conception of machine learning, namely Rosenblatt's perceptron. This is a simple setup modelled on a neuron in the brain: there are many inputs with varying electrical signals which are integrated, and depending on a threshold the neuron will either fire or not. An NN is a system of interconnected nodes which learn to perform a specific task after being trained in a supervised manner
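The perceptron described above can be sketched in a few lines: weighted inputs are integrated, and the unit "fires" only when the activation crosses a threshold. This is a minimal illustrative sketch, not the paper's network; the weights, bias and AND-gate example below are hypothetical values chosen for demonstration.

```python
import numpy as np

def perceptron(inputs, weights, bias, threshold=0.0):
    """Integrate the weighted input signals and fire (output 1)
    if the activation exceeds the threshold, otherwise output 0."""
    activation = np.dot(inputs, weights) + bias
    return 1 if activation > threshold else 0

# Hypothetical example: weights and bias chosen so the unit
# behaves like a logical AND gate on two binary inputs.
weights = np.array([0.5, 0.5])
bias = -0.7
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), weights, bias))
# Only (1, 1) produces an activation above 0 and makes the unit fire.
```

A full neural network chains many such units into layers and learns the weights from labelled examples, which is the supervised training referred to above.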