Abstract

The gap in resolution between existing global climate model output and that sought by decision-makers drives an ongoing need for climate downscaling. Here we test the extent to which developments in deep learning can outperform existing statistical approaches for downscaling historical rainfall in the highly complex terrain of New Zealand. While deep learning removes the need for manual feature selection when extracting spatiotemporal information from predictor fields, several key considerations must be addressed. These include the complexity of the network architecture, loss functions tailored to the problem, and input data considerations such as domain size and the amount of training data required for adequate out-of-sample generalization. Sensitivity testing across these considerations reveals that a relatively simple convolutional neural network (CNN) architecture with carefully selected loss functions can considerably outperform existing statistical downscaling models based on multiple linear regression with manual feature selection. When aggregated across the entire region, the fraction of explained variance on wet days increased from 0.35 to 0.52, the root-mean-square error was reduced by more than 20%, and percentage biases for the 90th percentile of rainfall improved by more than 25%. Using interpretable machine learning methods, we demonstrate that the CNN is capable of learning physically plausible relationships between the large-scale atmospheric environment and extreme localized rainfall events. The historical performance and physical interpretability documented here lend support to the wider development and application of deep learning in climate downscaling.
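To make the abstract's key ingredients concrete, the sketch below illustrates the general shape of such an approach: a small CNN that maps coarse predictor fields to high-resolution rainfall, trained with a loss that up-weights wet or extreme targets. This is a minimal PyTorch sketch under stated assumptions; the layer sizes, the `SimpleDownscalingCNN` and `weighted_mse` names, and the weighting scheme are illustrative choices, not the authors' exact architecture or loss.

```python
# Minimal sketch of a CNN downscaling model of the kind described above.
# All names, layer sizes, and the weighting scheme are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class SimpleDownscalingCNN(nn.Module):
    """Maps coarse predictor fields (e.g. winds and humidity on pressure
    levels) to high-resolution rainfall over a set of target locations."""

    def __init__(self, in_channels: int, out_pixels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, out_pixels),
            nn.Softplus(),  # rainfall is non-negative
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def weighted_mse(pred: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.1) -> torch.Tensor:
    """One example of a loss 'tailored to the problem': heavier rainfall
    gets a larger weight, so the network is not rewarded for regressing
    everything toward drizzle. The scheme here is an assumption."""
    weights = 1.0 + alpha * target
    return torch.mean(weights * (pred - target) ** 2)


# Usage with dummy shapes: a batch of 4 samples, 5 predictor channels on a
# 32x32 coarse grid, predicting rainfall at 100 target locations.
model = SimpleDownscalingCNN(in_channels=5, out_pixels=100)
x = torch.randn(4, 5, 32, 32)
y = torch.rand(4, 100) * 20.0  # synthetic targets in mm/day
loss = weighted_mse(model(x), y)
loss.backward()
```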
