Improving Forecast Quality with the Simplest Methods of Combining Individual Forecasts
Combining forecasts is considered the easiest way to improve forecast quality relative to individual models. In this paper, we test whether the simplest combination methods, such as simple averages and weighting schemes based on the standard errors of previous forecasts, can improve the performance of short-run forecasts of five resource price indicators (oil and metals). The work is based on the Gaidar Institute forecasts database, which stores the primary forecasts and makes it possible to compute their combinations in real time. Based on the results obtained, we conclude that even the simplest combination methods improve forecast accuracy. Moreover, in the case of resource prices, one can single out a group of methods (namely, combining with weights inversely proportional to the squared errors of the individual forecasts) that provides the largest gain in quality in most periods.
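For illustration only (not code from the paper), a minimal sketch of the inverse-squared-error weighting scheme named in the abstract; the NumPy layout and the numbers below are assumptions:

```python
import numpy as np

def inverse_mse_weights(past_errors: np.ndarray) -> np.ndarray:
    """Weights inversely proportional to each model's mean squared past
    forecast error (rows: past periods, columns: individual models)."""
    mse = np.mean(past_errors ** 2, axis=0)   # per-model MSE of past errors
    inv = 1.0 / mse                           # inverse-MSE scores
    return inv / inv.sum()                    # normalise so the weights sum to one

def combine(forecasts: np.ndarray, weights: np.ndarray) -> float:
    """Weighted combination of the current individual forecasts."""
    return float(np.dot(weights, forecasts))

# Illustrative numbers only: three individual one-step forecasts of a
# hypothetical oil-price index and their errors over the last four periods.
past_errors = np.array([[ 1.2, -0.4, 2.0],
                        [-0.8,  0.6, 1.5],
                        [ 0.5, -0.3, 1.1],
                        [ 1.0,  0.2, 1.8]])
current_forecasts = np.array([72.0, 74.5, 70.8])

w = inverse_mse_weights(past_errors)
print(w, combine(current_forecasts, w))
```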
- Conference Article
- 10.36334/modsim.2011.d10.james
- Dec 12, 2011
It is well known in the forecasting literature that combining forecasts from different models can often lead to superior forecast performance, at least in the mean squared error (MSE) sense. It has also been noted that combining forecasts by simple averaging often performs better than more sophisticated weighting schemes, although simple averages tend to ignore correlations between forecast errors. However, it is unclear whether these stylized facts hold under different forecast criteria. This is particularly important when evaluating the performance of Value-at-Risk (VaR) forecasts, where MSE is not an appropriate measure. In practice, VaR performance is measured against the back-testing procedure outlined in the Basel Accord. Given the role of VaR in risk management, it is important to investigate whether forecast combination provides any benefit in forecasting VaR. An interesting implication of this study is that, if forecast combination does in fact provide superior VaR forecasts over individual models, then it also provides a convenient way to combine qualitative forecasts (from expert opinion) and quantitative forecasts (from quantitative models). The combination of qualitative and quantitative forecasts may, in fact, further enhance the forecast accuracy of VaR. The aim of this paper is to provide an empirical evaluation of forecast combination for Value-at-Risk. Value-at-Risk forecasts based on four different volatility models, including EGARCH, IGARCH, and Stochastic Volatility, are constructed and combined. The forecast performance of the combined forecasts is compared to the forecast performance of each of the individual models. Two weighting schemes are considered in this paper, namely the simple weighted average and Quantile Regression (QR). The empirical performance of these forecasts is assessed using the percentages of violation proposed in the Basel Accord, with two sets of daily data, FTSE and S&P 500, for the period 3 January 1996 to 3 August 2010. The results show that, overall, (i) forecast combination performed better than individual models and (ii) the simple weighted average performed better than QR. These results are consistent with the stylized findings in the forecast combination literature. Thus, the paper provides empirical evidence supporting the use of forecast combination in forecasting VaR thresholds.
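As a hedged illustration of the simple weighted-average combination and the Basel-style violation count described above (not the authors' code; the data, the constant per-model VaR levels, and the 1% target are made up):

```python
import numpy as np

def combine_var(var_forecasts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Simple weighted average of VaR forecasts from several volatility
    models (rows: days, columns: models)."""
    return var_forecasts @ weights

def violation_rate(returns: np.ndarray, var: np.ndarray) -> float:
    """Basel-style backtest: share of days on which the realised return
    falls below the reported VaR threshold (stored as a negative return)."""
    return float(np.mean(returns < var))

# Illustrative data: 250 trading days, 1% VaR thresholds from three
# hypothetical volatility models held constant for simplicity.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 250)
individual_var = np.column_stack([
    np.full(250, -0.023),   # e.g. an EGARCH-type forecast (assumed)
    np.full(250, -0.025),   # e.g. an IGARCH-type forecast (assumed)
    np.full(250, -0.027),   # e.g. a stochastic-volatility forecast (assumed)
])
combined = combine_var(individual_var, np.array([1/3, 1/3, 1/3]))
print(violation_rate(combined, returns) if False else violation_rate(returns, combined))
```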
- Research Article
10
- 10.6057/2012tcrr03.06
- Dec 20, 2018
- Tropical Cyclone Research and Review
Operational Tropical Cyclone Forecast Verification Practice in the Western North Pacific Region
- Preprint Article
1
- 10.22004/ag.econ.12314
- Apr 1, 1987
The contention advanced in this paper is that forecast performance could be improved if short-term commodity forecasters were to formally consider using a variety of forecasting methods, rather than seeking to improve one selected method. Many researchers have demonstrated that a linear combination of forecasts can produce a composite superior to the individual component forecasts. Using a case study of two Bureau of Agricultural Economics (BAE) forecast series and alternative time series model forecasts of the same series, four methods of deriving composite forecasts are applied on an ex ante basis and are thus evaluated as a means of improving the Bureau's forecast performance. Although the authors could not form a superior composite forecast by combining the available forecasts, the application highlights the suitability of this approach for reviewing the performance of forecasting methods on a formal basis, and it proved useful in exposing strengths and weaknesses in BAE market information forecasts that would otherwise not have come to light.
- Research Article
64
- 10.1016/j.ijforecast.2017.08.005
- Sep 28, 2017
- International Journal of Forecasting
Some theoretical results on forecast combinations
- Research Article
63
- 10.1016/j.jhydrol.2009.08.028
- Aug 28, 2009
- Journal of Hydrology
Combining single-value streamflow forecasts – A review and guidelines for selecting techniques
- Research Article
- 10.1175/waf-d-24-0248.1
- Aug 1, 2025
- Weather and Forecasting
The Warn-on-Forecast System (WoFS) is a regional, rapidly updating, ensemble data assimilation and prediction system designed to provide short-term probabilistic guidance on severe and hazardous weather, including individual thunderstorms. As with most convection-allowing modeling systems, WoFS occasionally produces forecasts of thunderstorms with storm motion biases, which can be caused by multiple sources of error within the data assimilation and forecast system. These storm motion biases lead to storm displacement errors that grow during the forecast and increasingly degrade forecast quality. In this study, we investigate storm displacement errors in WoFS forecasts from cases in 2020–23 using an object-based technique in a novel way to define and match WoFS and Multi-Radar Multi-Sensor (MRMS) reflectivity objects. The storm displacement mean absolute errors and location biases are grouped by various attributes, including year, lead time, ensemble member, MRMS relative storm age, 850–300-hPa mean wind, and MRMS object mean intensity. Results from this investigation reveal that storm displacement errors in WoFS forecasts generally have an eastward bias, grow fastest within the first hour after forecast initialization, and are smallest 1–3 h after a thunderstorm has been assimilated. By understanding and characterizing the storm displacement errors, WoFS developers will be able to focus attention on possible error sources and preventative measures to further improve WoFS, and NWS forecasters will be able to mentally account for the storm displacement errors when issuing forecast and warning products. Significance Statement: The Warn-on-Forecast System (WoFS) is designed to provide probabilistic guidance on severe and hazardous weather, including individual thunderstorms. While WoFS has proven to be a successful forecast system, one issue subjectively highlighted by users is displacement errors in forecast storm locations, especially at later lead times. This study investigates these storm displacement errors in WoFS by exploring different attributes associated with the observed and forecast storms. In general, WoFS forecast storms tend to propagate too fast, resulting in eastward displacements. WoFS developers and NWS forecasters can use this knowledge to improve WoFS and operational forecasts, respectively.
- Research Article
2
- 10.1177/002795010117800110
- Oct 1, 2001
- National Institute Economic Review
The National Institute periodically reviews its forecast performance (Pain and Britton, 1992; Poulizac, Weale and Young, 1996). The structure of this note was prompted by two factors. First of all, the Institute's forecast for 2001 has turned out to be substantially too optimistic. Secondly, the IMF (Loungani, 2000) published a study on forecast performance which referred to consensus or average forecasts of the world economy rather than an individual organisation's forecast of any particular country's economy. The study argued:
- Research Article
23
- 10.1175/mwr-d-17-0051.1
- Dec 1, 2017
- Monthly Weather Review
The potential for storm surge to cause extensive property damage and loss of life has increased urgency to more accurately predict coastal flooding associated with landfalling tropical cyclones. This work investigates the sensitivity of coastal inundation from storm tide (surge + tide) to four hurricane parameters—track, intensity, size, and translation speed—and the sensitivity of inundation forecasts to errors in forecasts of those parameters. An ensemble of storm tide simulations is generated for three storms in the Gulf of Mexico, by driving a storm surge model with best track data and systematically generated perturbations of storm parameters from the best track. The spread of the storm perturbations is compared to average errors in recent operational hurricane forecasts, allowing sensitivity results to be interpreted in terms of practical predictability of coastal inundation at different lead times. Two types of inundation metrics are evaluated: point-based statistics and spatially integrated volumes. The practical predictability of surge inundation is found to be limited foremost by current errors in hurricane track forecasts, followed by intensity errors, then speed errors. Errors in storm size can also play an important role in limiting surge predictability at short lead times, due to observational uncertainty. Results show that given current mean errors in hurricane forecasts, location-specific surge inundation is predictable for as little as 12–24 h prior to landfall, less for small-sized storms. The results also indicate potential for increased surge predictability beyond 24 h for large storms by considering a storm-following, volume-integrated metric of inundation.
- Research Article
1
- 10.17016/ifdp.1991.412
- Oct 1, 1991
- International Finance Discussion Paper
Parameter constancy and a model's mean square forecast error are two commonly used measures of forecast performance. By explicit consideration of the information sets involved, this paper clarifies the roles that each plays in analyzing a model's forecast accuracy. Both criteria are necessary for "good" forecast performance, but neither (nor both) is sufficient. Further, these criteria fit into a general taxonomy of model evaluation statistics, and the information set corresponding to a model's mean square forecast error leads to a new test statistic, forecast-model encompassing. Two models of U.K. money demand illustrate the various measures of forecast accuracy.
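For context, a generic formulation of the mean square forecast error and of a forecast-encompassing regression (standard textbook forms, not necessarily the exact forecast-model-encompassing statistic developed in the paper):

```latex
% Mean square forecast error of model i over an H-period evaluation window
\[
  \mathrm{MSFE}_i \;=\; \frac{1}{H}\sum_{h=1}^{H}\bigl(y_{T+h}-\hat{y}_{i,T+h}\bigr)^{2}
\]
% A generic forecast-encompassing regression: model 1 is said to encompass
% model 2 if the hypothesis beta_2 = 0 is not rejected
\[
  y_{T+h} \;=\; \alpha + \beta_{1}\,\hat{y}_{1,T+h} + \beta_{2}\,\hat{y}_{2,T+h} + \varepsilon_{T+h}
\]
```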
- Research Article
39
- 10.1002/hyp.9679
- Dec 21, 2012
- Hydrological Processes
Hydrological ensemble prediction systems
- Book Chapter
5
- 10.1201/9780203859759-19
- Aug 20, 2009
Aggregation of randomized model ensemble outcomes for reconstructing nuclear signals from faulty sensors
- Research Article
1
- 10.1175/waf-d-20-0040.1
- Sep 9, 2020
- Weather and Forecasting
This study provides a statistical review on the forecast errors of tropical storm tracks and suggests a Bayesian procedure for updating the uncertainty about the error. The forecast track errors are assumed to form an axisymmetric bivariate normal distribution on a two-dimensional surface. The parameters are a mean vector and a covariance matrix, which imply the accuracy and precision of the operational forecast. A Bayesian method improves quantifying the varying parameters in the bivariate normal distribution. A normal-inverse-Wishart distribution is employed to determine the posterior distribution (i.e., the weights on the parameters). Based on the posterior distribution, the predictive probability density of track forecast errors is obtained as the marginal distribution. Here, “storm approach” is defined for any location within a specified radius of a tropical storm. Consequently, the storm approach probability for each location is derived through partial integration of the marginal distribution within the forecast storm radius. The storm approach probability is considered a realistic and effective representation of storm warning for communicating the threat to local residents since the location-specific interpretation is available on a par with the official track forecast.
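A rough mathematical sketch of the conjugate setup described in this abstract; the notation and the exact form of the approach-probability integral are our assumptions, not the paper's:

```latex
% Bivariate track-error model with a conjugate normal-inverse-Wishart prior
% (notation ours, not the paper's)
\[
  e_i \mid \mu,\Sigma \sim \mathcal{N}_2(\mu,\Sigma), \qquad
  (\mu,\Sigma) \sim \mathrm{NIW}(\mu_0,\kappa_0,\nu_0,\Psi_0)
\]
% Conjugacy keeps the posterior in the NIW family, and the posterior
% predictive density p(e | data) of a new error is a bivariate Student-t.
% The storm approach probability at location x, given forecast position f
% and approach radius r, integrates that predictive density over errors
% that place the storm within r of x:
\[
  \Pr(\text{approach at } x) \;=\;
  \int_{\{e \,:\, \lVert x - (f + e)\rVert \le r\}} p(e \mid \text{data})\, de
\]
```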
- Book Chapter
- 10.1093/oso/9780198774013.003.0012
- Feb 16, 1995
Parameter constancy and a model’s mean square forecast error are two commonly used measures of forecast performance. By explicit consideration of the information sets involved, this paper clarifies the roles that each plays in analyzing a model’s forecast accuracy. Both criteria are necessary for “good” forecast performance, but neither (nor both) is sufficient. Further, these criteria fit into a general taxonomy of model evaluation statistics.
- Research Article
147
- 10.1016/0161-8938(92)90017-7
- Aug 1, 1992
- Journal of Policy Modeling
Parameter constancy, mean square forecast errors, and measuring forecast performance: An exposition, extensions, and illustration
- Research Article
9
- 10.1016/j.jmacro.2018.12.004
- Dec 15, 2018
- Journal of Macroeconomics
Bayesian forecast combination in VAR-DSGE models
- Research Article
- 10.22394/1993-7601-2024-74-124-143
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-74-78-103
- Jan 1, 2024
- Applied Econometrics
- Research Article
1
- 10.22394/1993-7601-2024-73-35-58
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-76-96-119
- Jan 1, 2024
- Applied Econometrics
- Research Article
1
- 10.22394/1993-7601-2024-73-78-101
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-73-119-142
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-75-78-97
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-73-59-77
- Jan 1, 2024
- Applied Econometrics
- Research Article
1
- 10.22394/1993-7601-2024-74-104-123
- Jan 1, 2024
- Applied Econometrics
- Research Article
- 10.22394/1993-7601-2024-76-5-28
- Jan 1, 2024
- Applied Econometrics