Abstract

Many Machine Learning (ML) systems, especially deep neural networks, are fundamentally regarded as black boxes, since it is difficult to fully grasp how they function once they have been trained. Here, we tackle the issue of the interpretability of a high‐accuracy ML model created to model the flux of Earth's radiation belt electrons. The Outer RadIation belt Electron Neural net (ORIENT) model uses only solar wind conditions and geomagnetic indices as input features. Using the Deep SHAPley additive explanations (DeepSHAP) method, we show for the first time that the “black box” ORIENT model can be successfully explained. Two significant electron flux enhancement events observed by the Van Allen Probes, during the storm interval of 17–18 March 2013 and the non‐storm interval of 19–20 September 2013, are investigated using the DeepSHAP method. The results show that the feature importance calculated from the purely data‐driven ORIENT model identifies physically meaningful behavior consistent with current physical understanding. This work not only demonstrates that the physics of the radiation belt was captured in the training of our previous model, but also that this method can be applied generally to other similar models to better explain their results and to potentially discover new physical mechanisms.
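The attribution idea underlying DeepSHAP is the Shapley value: each input feature's contribution is its average marginal effect on the model output over all feature coalitions, with absent features replaced by a baseline (reference) input. DeepSHAP approximates these values efficiently for deep networks via DeepLIFT-style backpropagation; the sketch below instead computes exact Shapley values by brute-force coalition enumeration on a toy linear stand-in model. The `toy_model` weights and inputs are purely illustrative assumptions, not the ORIENT model or its drivers.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)
    idx = list(range(n))
    phi = [0.0] * n
    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in idx]
                without_i = [x[j] if j in S else baseline[j] for j in idx]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical stand-in for a flux model: three drivers, linear response
def toy_model(v):
    sw_speed, dst, kp = v
    return 0.5 * sw_speed - 0.3 * dst + 0.2 * kp

x = [2.0, -1.0, 3.0]    # "event" driver values (arbitrary units)
base = [0.0, 0.0, 0.0]  # quiet-time reference input

phi = shapley_values(toy_model, x, base)
print(phi)  # for a linear model: phi_i = w_i * (x_i - base_i) -> [1.0, 0.3, 0.6]
# Efficiency property: attributions sum to f(x) - f(baseline)
print(sum(phi) + toy_model(base))  # -> 1.9, equal to toy_model(x)
```

Brute-force enumeration scales as 2^n in the number of features, which is why DeepSHAP's network-specific approximation is needed for models with many inputs.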

