Abstract

Measuring bias is important as it helps identify flaws in quantitative forecasting methods or judgmental forecasts. It can, therefore, potentially help improve forecasts. Despite this, bias tends to be under-represented in the literature: many studies focus solely on measuring accuracy. Methods for assessing bias in single series are relatively well-known and well-researched, but for datasets containing thousands of observations for multiple series, the methodology for measuring and reporting bias is less obvious. We compare alternative approaches against a number of criteria when rolling-origin point forecasts are available for different forecasting methods and for multiple horizons over multiple series. We focus on relatively simple, yet interpretable and easy-to-implement metrics and visualization tools that are likely to be applicable in practice. To study the statistical properties of alternative measures we use theoretical concepts and simulation experiments based on artificial data with predetermined features. We describe the difference between mean and median bias, describe the connection between metrics for accuracy and bias, provide suitable bias measures depending on the loss function used to optimise forecasts, and suggest which measures for accuracy should be used to accompany bias indicators. We propose several new measures and provide our recommendations on how to evaluate forecast bias across multiple series.

Highlights

  • In order to obtain a simpler metric for median bias, we propose applying the Overestimation Percentage corrected (OPc), which we introduced earlier, to multiple series

  • Forecast Evaluation Workflows (FEWs): for the point forecast evaluation setup we defined earlier, we propose two alternative step-by-step procedures for forecast evaluation and comparison, depending on the loss function used to optimise and compare forecasts

  • Given the setup and the above criteria, we conducted simulation experiments to evaluate the appropriateness of alternative error measures
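The highlights mention the Overestimation Percentage corrected (OPc) as a simple indicator of median bias. As a minimal sketch, assuming OPc counts the share of forecasts that exceed their outcomes with exact ties counted as half (the exact definition is given in the authors' earlier work), it could be computed as:

```python
import numpy as np

def opc(forecasts, actuals):
    """Overestimation Percentage corrected (OPc) -- illustrative sketch.

    Assumed definition: the share of forecasts exceeding the
    corresponding outcomes, with exact ties counted as half an
    overestimation. A value near 0.5 suggests no median bias.
    """
    f = np.asarray(forecasts, dtype=float)
    a = np.asarray(actuals, dtype=float)
    over = np.sum(f > a)   # strict overestimations
    ties = np.sum(f == a)  # exact ties, counted at half weight
    return (over + 0.5 * ties) / f.size

# Forecasts mostly above outcomes -> OPc above 0.5
print(opc([11, 12, 13, 10], [10, 10, 10, 10]))  # 0.875
```

Under this reading, pooling errors from multiple series into one OPc value is straightforward, which is what makes it attractive for large datasets.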


Introduction

Bias refers to a systematic error. In a forecasting context, bias is usually measured as the mean forecast error (Hill, 2012, p. 140). This gives an indication of mean bias, which represents a tendency to produce point forecasts that are typically either too high or too low in comparison with the corresponding outcomes, irrespective of their size. Less commonly measured is regression (or slope) bias, which occurs where the systematic discrepancy between the forecast and the outcome depends on the size of the forecast (Goodwin, 2000), so that a unit increase in the point forecast tends not to equate to a unit increase in the outcome. Regression bias shows how the mean forecast error depends on the forecast itself.
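The distinction above can be sketched numerically: mean bias is the average forecast error, while regression (slope) bias can be checked by regressing outcomes on forecasts and comparing the slope with 1 (a Mincer-Zarnowitz-style check). The data below are hypothetical, and the sign convention for the error (here, actual minus forecast) varies across the literature:

```python
import numpy as np

# Hypothetical outcomes and point forecasts for illustration
actuals   = np.array([100.0, 120.0, 90.0, 110.0, 105.0])
forecasts = np.array([ 95.0, 115.0, 85.0, 105.0, 100.0])

# Mean bias: average forecast error (error = actual - forecast here)
mean_error = np.mean(actuals - forecasts)

# Regression (slope) bias: regress outcomes on forecasts; a slope
# different from 1 indicates that a unit increase in the forecast
# does not equate to a unit increase in the outcome
slope, intercept = np.polyfit(forecasts, actuals, 1)

print(mean_error)       # 5.0 -> forecasts run low on average
print(round(slope, 3))  # 1.0 -> no slope bias in this toy data
```

In this toy example the forecasts are uniformly 5 units too low, so the mean error flags mean bias while the unit slope shows no regression bias; the two kinds of bias are diagnosed separately.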
