Abstract

A workshop was recently held at Nagoya University (2017 October 31–November 2), sponsored by the Center for International Collaborative Research at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today's operational solar flare forecasting facilities. Building upon Paper I of this series, in Paper II we described the participating methods for this latest comparison effort and the evaluation methodology, and presented quantitative comparisons. In this paper, we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval and the small number of participating methods, we do find that forecast performance: (1) appears to improve by including persistence or prior flare activity, region evolution, and a human "forecaster in the loop"; (2) is hurt by restricting data to disk-center observations; (3) may benefit from long-term statistics, but mostly when these are combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, in Paper IV (Park et al. 2019) we will present a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
