Abstract

The choice of the best algorithm for interpolating data gathered at a finite number of locations has been a persistently relevant topic. Typical papers take a single data set, a single set of data points, and a handful of algorithms. The process considers a subset I of the data points as known, builds the interpolant with each algorithm, applies it to the points of another subset C, and evaluates the MAE (mean absolute error), the RMSE (root mean square error), or another metric over those points. The lower these statistics, the better the algorithm, so a deterministic ranking between methods (without a confidence level) can be derived from them. Ties between methods are usually not considered. In this article a complete protocol is proposed in order to build, with a modest additional effort, a ranking with a confidence level. To illustrate this point, the results of two tests are shown. In the first one, a simple Monte Carlo experiment was devised using irregularly distributed points taken from a reference DEM (digital elevation model) in raster format. Different metrics led to different rankings, suggesting that choosing the metric that defines the 'best interpolation algorithm' involves a trade-off. The second experiment used mean daily radiation data from an international interpolation comparison exercise and RMSE as the metric of success. Only five simple interpolation methods were employed. The ranking obtained with this protocol correctly anticipated the first and second places, which were afterwards confirmed using independent control data.
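
The sketch below illustrates, under stated assumptions, the kind of protocol the abstract describes: repeatedly split scattered points into a known subset I and a control subset C, interpolate with several methods, score each on C with RMSE, and tally how often each method ranks first to attach a confidence level to the ranking. The synthetic surface, the 70/30 split, the number of trials, and the three methods (nearest-neighbour, linear, inverse distance weighting) are illustrative choices, not the paper's actual setup.

```python
# Monte Carlo ranking of interpolation methods with a confidence level
# (a minimal sketch; data and method choices are placeholders).
import numpy as np
from scipy.interpolate import NearestNDInterpolator, LinearNDInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-in for irregularly distributed samples of a reference surface.
pts = rng.uniform(0.0, 10.0, size=(500, 2))
z = np.sin(pts[:, 0]) + 0.5 * np.cos(pts[:, 1])

def idw(known_xy, known_z, query_xy, power=2.0):
    """Simple inverse-distance-weighting interpolator."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    w = 1.0 / d**power
    return (w @ known_z) / w.sum(axis=1)

methods = {
    "nearest": lambda I_xy, I_z, C_xy: NearestNDInterpolator(I_xy, I_z)(C_xy),
    "linear":  lambda I_xy, I_z, C_xy: LinearNDInterpolator(I_xy, I_z)(C_xy),
    "idw":     idw,
}

n_trials = 200
wins = {name: 0 for name in methods}          # times each method ranks first

for _ in range(n_trials):
    # Randomly split the points into a 'known' subset I and a control subset C.
    idx = rng.permutation(len(pts))
    I_idx, C_idx = idx[:350], idx[350:]
    I_xy, I_z = pts[I_idx], z[I_idx]
    C_xy, C_z = pts[C_idx], z[C_idx]

    rmse = {}
    for name, fit_predict in methods.items():
        pred = fit_predict(I_xy, I_z, C_xy)
        ok = np.isfinite(pred)                # linear interp is NaN outside hull
        rmse[name] = np.sqrt(np.mean((pred[ok] - C_z[ok]) ** 2))

    wins[min(rmse, key=rmse.get)] += 1

# The win frequencies give a ranking together with an attached confidence level.
for name, count in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ranked first in {100.0 * count / n_trials:.1f}% of trials")
```

Swapping RMSE for MAE (or another metric) in the loop is enough to reproduce the observation that different metrics can lead to different rankings.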
