Abstract

Synthetic solar irradiance models that generate irradiance time series are often trained on too few sites with limited climatic diversity, producing models that are overfit and cannot be applied globally. The impact of training data on solar energy applications remains relatively unexplored and is therefore important to quantify. A new Markov-downscaling methodology is proposed to test diverse training datasets. For synthetic global horizontal irradiance (GHI) generation, Markov downscaling occasionally validated excellently, whereas linear interpolation failed at all sites in this application. Markov downscaling was unsuitable when the aim was to reproduce the actual observed series, as in historic gap filling, interpolation, or forecasting; it is therefore purely a synthetic model. Comparing 100 repetitions of all 752 combinations of training and testing data revealed a significant influence of training data: changing the training data significantly affected the accuracy of the downscaled GHI. We find that climate similarity is not a fundamental driver of GHI similarity and that some form of site index derived from kc must have an influence. One site, Izana in Tenerife, performed consistently poorly against all but 10 testing sites, exemplifying how idiosyncrasies of a model can produce unexpected global behaviour when training data are explored only minimally. Considerable performance variation arose from training data selection alone and therefore warrants careful attention. It is concluded that all GHI methodological approaches that require training data must undergo a similar global investigation before they can be accepted as globally applicable.
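The abstract does not detail the downscaling algorithm itself, but the general idea behind Markov-chain synthetic irradiance generation can be sketched as follows: a transition matrix is estimated from binned kc values at the training site and then sampled to produce a synthetic series, which is rescaled to GHI. The sketch below is illustrative only and is not the authors' published method; the first-order chain, the bin count, and the function names (fit_transition_matrix, generate_kc_series, load_training_kc) are assumptions made for the example.

```python
import numpy as np

def fit_transition_matrix(kc_series, n_states=10, kc_max=1.2):
    """Estimate a first-order Markov transition matrix from a training
    kc time series (illustrative sketch, not the paper's exact model)."""
    edges = np.linspace(0.0, kc_max, n_states + 1)
    states = np.clip(np.digitize(kc_series, edges) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for s0, s1 in zip(states[:-1], states[1:]):
        counts[s0, s1] += 1
    # Normalise each row to probabilities; unvisited states fall back to uniform.
    row_sums = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        P = np.where(row_sums > 0, counts / row_sums, 1.0 / n_states)
    return P, edges

def generate_kc_series(P, edges, n_steps, rng=None):
    """Sample a synthetic kc series by walking the Markov chain and drawing
    uniformly within each visited bin."""
    rng = np.random.default_rng(rng)
    n_states = P.shape[0]
    state = rng.integers(n_states)
    out = np.empty(n_steps)
    for t in range(n_steps):
        state = rng.choice(n_states, p=P[state])
        out[t] = rng.uniform(edges[state], edges[state + 1])
    return out

# Hypothetical usage: train on one site's kc data, generate a synthetic series,
# then rescale with a clear-sky model to obtain synthetic GHI.
# kc_train = load_training_kc(...)                       # hypothetical loader
# P, edges = fit_transition_matrix(kc_train)
# kc_synth = generate_kc_series(P, edges, n_steps=1440, rng=42)
# ghi_synth = kc_synth * clear_sky_ghi                   # requires a clear-sky model
```

Because the output is drawn stochastically from the training-site transition statistics, a generator of this kind reproduces distributional behaviour rather than any specific observed series, which is consistent with the abstract's point that such a model is purely synthetic and unsuited to gap filling, interpolation, or forecasting.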
