A Simple Method for Predicting Covariance Matrices of Financial Returns
We consider the well-studied problem of predicting the time-varying covariance matrix of a vector of financial returns. Popular methods range from simple predictors like rolling window or exponentially weighted moving average (EWMA) to more sophisticated predictors such as generalized autoregressive conditional heteroscedastic (GARCH) type methods. Building on a specific covariance estimator suggested by Engle in 2002, we propose a relatively simple extension that requires little or no tuning or fitting, is interpretable, and produces results at least as good as MGARCH, a popular extension of GARCH that handles multiple assets. To evaluate predictors we introduce a novel approach, evaluating the regret of the log-likelihood over a time period such as a quarter. This metric allows us to see not only how well a covariance predictor does overall, but also how quickly it reacts to changes in market conditions. Our simple predictor outperforms MGARCH in terms of regret. We also test covariance predictors on downstream applications such as portfolio optimization methods that depend on the covariance matrix. For these applications our simple covariance predictor and MGARCH perform similarly. This monograph makes three contributions. First, we propose a new method for predicting the time-varying covariance matrix of a vector of financial returns, building on a specific covariance estimator suggested by Engle in 2002. Our method is a relatively simple extension that requires little or no tuning or fitting, is interpretable, and produces results at least as good as MGARCH.
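The abstract does not spell out the predictor, but the EWMA baseline it builds on and the log-likelihood evaluation it proposes can be sketched as follows. This is a minimal illustration, not the authors' implementation; the half-life, warm-start window, and Gaussian likelihood are assumptions.

```python
import numpy as np

def ewma_covariance_forecasts(returns, halflife=63, warmup=20):
    """One-step-ahead EWMA covariance forecasts.

    returns: (T, n) array of daily returns.
    The forecast for day t uses only returns before day t.
    halflife and warmup are illustrative choices.
    """
    T, n = returns.shape
    beta = 0.5 ** (1.0 / halflife)                          # daily decay factor
    sigma = np.cov(returns[:warmup].T) + 1e-8 * np.eye(n)   # warm start
    forecasts = []
    for t in range(warmup, T):
        forecasts.append(sigma.copy())
        r = returns[t][:, None]
        sigma = beta * sigma + (1 - beta) * (r @ r.T)
    return forecasts

def gaussian_log_likelihood(r, sigma):
    """Log-density of the return vector r under N(0, sigma)."""
    n = len(r)
    _, logdet = np.linalg.slogdet(sigma)
    quad = r @ np.linalg.solve(sigma, r)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
```

The regret metric described above would then compare the log-likelihood a predictor attains over a period such as a quarter against the best value attainable in hindsight for that period.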
- Research Article
2
- 10.1103/physrevd.103.103535
- May 28, 2021
- Physical Review D
Covariance matrices are among the most difficult pieces of end-to-end cosmological analyses. In principle, for two-point functions, each component involves a four-point function, and the resulting covariance often has hundreds of thousands of elements. We investigate various compression mechanisms capable of vastly reducing the size of the covariance matrix in the context of cosmic shear statistics. This helps identify which of its parts are most crucial to parameter estimation. We start with simple compression methods, isolating and "removing" 200 modes associated with the lowest eigenvalues, then those with the lowest signal-to-noise ratio, before moving on to more sophisticated schemes like compression at the tomographic level and, finally, the Massively Optimized Parameter Estimation and Data compression (MOPED) algorithm. We find that, while most of these approaches prove useful for a few parameters of interest, like $\Omega_m$, the simplest yield a loss of constraining power on the intrinsic alignment (IA) parameters as well as $S_8$. For the case considered -- cosmic shear from the first year of data from the Dark Energy Survey -- only MOPED was able to replicate the original constraints in the 16-parameter space. Finally, we apply a tolerance test to the elements of the compressed covariance matrix obtained with MOPED and confirm that the IA parameter $A_{\mathrm{IA}}$ is the most susceptible to inaccuracies in the covariance matrix.
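As a rough illustration of the simplest scheme mentioned above, removing the modes attached to the lowest eigenvalues amounts to projecting the data vector and covariance onto the remaining eigenvectors. The matrix sizes and the number of removed modes below are placeholders, not the paper's configuration.

```python
import numpy as np

def compress_lowest_eigenmodes(cov, data_vector, n_remove=200):
    """Drop the n_remove smallest-eigenvalue modes of cov and project
    both the covariance and the data vector onto the retained modes."""
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    keep = evecs[:, n_remove:]             # retain the higher modes
    cov_c = keep.T @ cov @ keep            # compressed covariance
    data_c = keep.T @ data_vector          # compressed data vector
    return cov_c, data_c
```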
- Research Article
18
- 10.24148/wp2000-21
- Apr 1, 2000
- Federal Reserve Bank of San Francisco, Working Paper Series
Covariance matrix forecasts of financial asset returns are an important component of current practice in financial risk management. A wide variety of models, ranging from matrices of simple summary measures to covariance matrices implied from option prices, are available for generating such forecasts. In this paper, we evaluate the relative accuracy of different covariance matrix forecasts using standard statistical loss functions and a value-at-risk (VaR) framework. This framework consists of hypothesis tests examining various properties of VaR models based on these forecasts as well as an evaluation using a regulatory loss function. Using a foreign exchange portfolio, we find that implied covariance matrix forecasts appear to perform best under standard statistical loss functions. However, within the economic context of a VaR framework, the performance of VaR models depends more on their distributional assumptions than on their covariance matrix specification. Of the forecasts examined, simple specifications, such as exponentially-weighted moving averages of past observations, perform best with regard to the magnitude of VaR exceptions and regulatory capital requirements. These results provide empirical support for the commonly-used VaR models based on simple covariance matrix forecasts and distributional assumptions.
- Research Article
10
- 10.2139/ssrn.305279
- Mar 28, 2002
- SSRN Electronic Journal
Covariance matrix forecasts of financial asset returns are an important component of current practice in financial risk management. A wide variety of models, ranging from matrices of simple summary measures to covariance matrices implied from option prices, are available for generating such forecasts. In this paper, we evaluate the relative accuracy of different covariance matrix forecasts using standard statistical loss functions and a value-at-risk (VaR) framework. This framework consists of hypothesis tests examining various properties of VaR models based on these forecasts as well as an evaluation using a regulatory loss function. Using a foreign exchange portfolio, we find that implied covariance matrix forecasts appear to perform best under standard statistical loss functions. However, within the economic context of a VaR framework, the performance of VaR models depends more on their distributional assumptions than on their covariance matrix specification. Of the forecasts examined, simple specifications, such as exponentially-weighted moving averages of past observations, perform best with regard to the magnitude of VaR exceptions and regulatory capital requirements. These results provide empirical support for the commonly-used VaR models based on simple covariance matrix forecasts and distributional assumptions.
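The VaR-based evaluation the two abstracts above describe can be sketched as follows: a covariance forecast is turned into a portfolio VaR under a distributional assumption, and exceptions are counted. The normality assumption, confidence level, and interfaces are illustrative.

```python
import numpy as np
from scipy.stats import norm

def count_var_exceptions(weights, returns, cov_forecasts, level=0.99):
    """Count days on which the realized portfolio loss exceeds the
    VaR implied by a one-step-ahead covariance forecast.

    weights:       (n,) portfolio weights
    returns:       (T, n) realized returns
    cov_forecasts: length-T sequence of (n, n) covariance forecasts
    """
    z = norm.ppf(level)
    exceptions = 0
    for r, sigma in zip(returns, cov_forecasts):
        port_vol = np.sqrt(weights @ sigma @ weights)
        var = z * port_vol                     # VaR under a normal assumption
        exceptions += (-(weights @ r)) > var   # loss larger than forecast VaR
    return int(exceptions)
```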
- Research Article
20
- 10.1080/01621459.2016.1148608
- Jan 2, 2017
- Journal of the American Statistical Association
Motivated by a changing market environment over time, we consider high-dimensional data such as financial returns, generated by a hidden Markov model that allows for switching between different regimes or states. To get more stable estimates of the covariance matrices of the different states, potentially driven by a number of observations that is small compared to the dimension, we modify the expectation–maximization (EM) algorithm so that it yields shrinkage estimators for the covariance matrices. The final algorithm turns out to produce better estimates not only for the covariance matrices but also for the transition matrix. It results in a more stable and reliable filter that allows for reconstructing the values of the hidden Markov chain. In addition to a simulation study performed in this article, we also present a series of theoretical results that include dimensionality asymptotics and provide the motivation for certain techniques used in the algorithm. Supplementary materials for this article are available online.
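The abstract does not give the shrinkage formula; a common choice, sketched here, is linear shrinkage of each state's weighted sample covariance toward a scaled identity inside the M-step. The fixed shrinkage intensity is an assumption; the paper derives its own estimators.

```python
import numpy as np

def shrunk_state_covariance(X, resp, alpha=0.2):
    """Weighted sample covariance for one hidden state, shrunk toward a
    scaled identity target.

    X:     (T, p) observations
    resp:  (T,) E-step responsibilities of this state
    alpha: shrinkage intensity (assumed fixed here)
    """
    w = resp / resp.sum()
    mu = w @ X
    Xc = X - mu
    S = (Xc * w[:, None]).T @ Xc                          # weighted covariance
    target = np.trace(S) / X.shape[1] * np.eye(X.shape[1])
    return (1 - alpha) * S + alpha * target
```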
- Research Article
- 10.47654/v22y2018i1p369-404
- Jan 1, 2018
- Advances in Decision Sciences
Modeling financial returns is challenging because the correlations and variances of returns are time-varying and the covariance matrices can be quite high-dimensional. In this paper, we develop a Bayesian shrinkage approach with a modified Cholesky decomposition to model correlations between financial returns. We reparameterize the correlation parameters to meet their positive-definiteness constraint for Bayesian analysis. To implement an efficient sampling scheme in posterior inference, a hierarchical representation of the Bayesian lasso is used to shrink unknown coefficients in linear regressions. Simulation results show good sampling properties, with Markov chain Monte Carlo iterates converging quickly. Using a real data example, we illustrate the application of the proposed Bayesian shrinkage method in modeling stock returns in Hong Kong.
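The reparameterization step can be illustrated with the modified Cholesky decomposition: any strictly lower-triangular coefficient matrix and any vector of log innovation variances map back to a positive-definite matrix, which is what makes unconstrained Bayesian sampling possible. The sketch below is generic, not the paper's exact parameterization of the correlation matrix.

```python
import numpy as np

def covariance_from_modified_cholesky(phi, log_d):
    """Rebuild a positive-definite matrix from unconstrained
    modified-Cholesky parameters.

    phi:   (p, p) matrix; only its strictly lower triangle is used
    log_d: (p,) unconstrained log innovation variances

    With T = I - tril(phi, -1) (unit lower triangular) and D = diag(exp(log_d)),
    T @ Sigma @ T.T = D, so Sigma = T^{-1} D T^{-T} is positive definite
    for any choice of phi and log_d.
    """
    p = len(log_d)
    T = np.eye(p) - np.tril(phi, k=-1)
    D = np.diag(np.exp(log_d))
    T_inv = np.linalg.inv(T)
    return T_inv @ D @ T_inv.T
```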
- Research Article
126
- 10.1109/tip.2016.2553446
- Apr 12, 2016
- IEEE Transactions on Image Processing
Person re-identification aims to match the images of pedestrians across different camera views from different locations. This is a challenging intelligent video surveillance problem that remains an active area of research due to the need for performance improvement. Person re-identification involves two main steps: feature representation and metric learning. Although the keep it simple and straightforward (KISS) metric learning method for discriminative distance metric learning has been shown to be effective for person re-identification, the estimation of the inverse of a covariance matrix is unstable and indeed may not exist when the training set is small, resulting in poor performance. Here, we present dual-regularized KISS (DR-KISS) metric learning. By regularizing the two covariance matrices, DR-KISS improves on KISS by reducing overestimation of the large eigenvalues of the two estimated covariance matrices and, in doing so, guarantees that the covariance matrices are invertible. Furthermore, we provide theoretical analyses supporting the motivations. Specifically, we first prove why the regularization is necessary. Then, we prove that the proposed method is robust for generalization. We conduct extensive experiments on three challenging person re-identification datasets, VIPeR, GRID, and CUHK01, and show that DR-KISS achieves new state-of-the-art performance.
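For orientation, the KISS construction forms a metric from the inverse covariances of difference vectors of similar and dissimilar pairs; DR-KISS regularizes both covariances before inversion. The ridge-style regularizer below is only a stand-in for the paper's regularization, and the interfaces are assumptions.

```python
import numpy as np

def kiss_style_metric(diffs_similar, diffs_dissimilar, lam=0.01):
    """Mahalanobis-style metric M from pairwise difference vectors, with a
    simple regularizer applied to both covariance matrices so they are
    invertible. Distance: d(xi, xj) = (xi - xj)^T M (xi - xj).
    """
    d = diffs_similar.shape[1]

    def regularized_cov(diffs):
        S = diffs.T @ diffs / len(diffs)
        return (1 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)

    sigma_s = regularized_cov(diffs_similar)     # similar (intra-person) pairs
    sigma_d = regularized_cov(diffs_dissimilar)  # dissimilar pairs
    return np.linalg.inv(sigma_s) - np.linalg.inv(sigma_d)
```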
- Conference Article
3
- 10.1109/aipr.2011.6176368
- Oct 1, 2011
Estimation of covariance matrices is a fundamental step in hyperspectral remote sensing, where most detection algorithms make use of the covariance matrix in whitening procedures. We present a simple method to improve the estimation of the eigenvalues of a sample covariance matrix. With the improved eigenvalues we construct an improved covariance matrix. Our method is based on the Marcenko-Pastur law, the theory of eigenvalue bounds, and energy conservation. Our objective is to add a new method for estimating the eigenvalues of Wishart covariance matrices in scenarios where the sample size is small. Our method is simple, practical, and easy to implement (it consists of a multiplication of three matrices). We conducted our study with extensive simulations and a few examples of hyperspectral remote sensing data measured in the long-wave infrared region (8-12 μm). We show examples of the improved eigenvalues compared with the sampled eigenvalues. We choose the following five figures of merit for evaluating our method, expressed as ratios of properties between the sampled data and our solution: (i) the residual (rms), which gives the improvement of the solution over the sampled data with respect to the population eigenvalues; (ii) the area under the scree plot; (iii) the condition number, which gives the improvement in stability (regularization); (iv) a distance measure that gives the average statistical improvement between the improved and the sampled eigenvalues; and (v) the Kullback-Leibler distance. We show hyperspectral matched-filter detection performance (ROC curves) for TELOPS data where we use our improved covariance matrix. We compare the improved ROC to the one obtained with the sampled (data) covariance matrix.
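The paper's own estimator is not reproduced in the abstract; the sketch below shows a generic random-matrix-theory recipe in the same spirit: eigenvalues below the Marcenko-Pastur upper edge are treated as noise and flattened to their mean, which keeps the trace (total energy) unchanged, and the covariance is rebuilt from the modified spectrum. The normalization and threshold are assumptions.

```python
import numpy as np

def mp_clipped_covariance(sample_cov, n_samples):
    """Rebuild a covariance after flattening the noise bulk of its spectrum.

    Eigenvalues below the Marcenko-Pastur upper edge are replaced by their
    mean (trace-preserving); the matrix is reassembled as V diag(lam) V^T.
    This is a generic RMT clipping recipe, not the paper's estimator.
    """
    p = sample_cov.shape[0]
    q = p / n_samples
    scale = np.mean(np.diag(sample_cov))        # crude variance normalization
    lam_plus = scale * (1 + np.sqrt(q)) ** 2    # MP upper edge
    evals, evecs = np.linalg.eigh(sample_cov)
    evals = evals.copy()
    noisy = evals < lam_plus
    if noisy.any():
        evals[noisy] = evals[noisy].mean()      # flatten the noise bulk
    return evecs @ np.diag(evals) @ evecs.T
```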
- Research Article
39
- 10.1007/bf01201025
- Mar 1, 1994
- Journal of Classification
In multivariate discrimination of several normal populations, the optimal classification procedure is based on quadratic discriminant functions. We compare expected error rates of the quadratic classification procedure if the covariance matrices are estimated under the following four models: (i) arbitrary covariance matrices, (ii) common principal components, (iii) proportional covariance matrices, and (iv) identical covariance matrices. Using Monte Carlo simulation to estimate expected error rates, we study the performance of the four discrimination procedures for five different parameter setups corresponding to “standard” situations that have been used in the literature. The procedures are examined for sample sizes ranging from 10 to 60, and for two to four groups. Our results quantify the extent to which a parsimonious method reduces error rates, and demonstrate that choosing a simple method of discrimination is often beneficial even if the underlying model assumptions are wrong.
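A minimal sketch of the quadratic classification rule under models (i) and (iv): with per-group covariance estimates it is the general quadratic rule, and passing the same pooled estimate for every group reduces it to the identical-covariance case. The proportional and common-principal-components models are not implemented here.

```python
import numpy as np

def quadratic_classify(x, means, covs, priors):
    """Assign x to the group with the highest Gaussian discriminant score.

    means, covs, priors: per-group mean vectors, covariance matrices,
    and prior probabilities. Using one pooled covariance for all groups
    gives the identical-covariance (linear) special case.
    """
    scores = []
    for mu, S, pi in zip(means, covs, priors):
        _, logdet = np.linalg.slogdet(S)
        diff = x - mu
        maha = diff @ np.linalg.solve(S, diff)
        scores.append(np.log(pi) - 0.5 * logdet - 0.5 * maha)
    return int(np.argmax(scores))
```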
- Research Article
240
- 10.2139/ssrn.267792
- Jan 1, 2001
- SSRN Electronic Journal
This paper provides a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency volatility and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on restrictive and complicated parametric multivariate ARCH or stochastic volatility models, which often perform poorly at intraday frequencies. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time series procedures for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we formally develop the links between the conditional covariance matrix and the concept of realized volatility. Next, using continuously recorded observations for the Deutschemark / Dollar and Yen / Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to popular daily ARCH and related models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, gives rise to well-calibrated density forecasts of future returns, and correspondingly accurate quantile estimates. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.
- Research Article
3629
- 10.1111/1468-0262.00418
- Mar 1, 2003
- Econometrica
This paper provides a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency volatility and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on restrictive and complicated parametric multivariate ARCH or stochastic volatility models, which often perform poorly at intraday frequencies. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time series procedures for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we formally develop the links between the conditional covariance matrix and the concept of realized volatility. Next, using continuously recorded observations for the Deutschemark / Dollar and Yen / Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to popular daily ARCH and related models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, gives rise to well-calibrated density forecasts of future returns, and correspondingly accurate quantile estimates. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.
- Single Report
59
- 10.3386/w8160
- Mar 1, 2001
This paper provides a general framework for integration of high-frequency intraday data into the measurement, modeling and forecasting of daily and lower frequency volatility and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on restrictive and complicated parametric multivariate ARCH or stochastic volatility models, which often perform poorly at intraday frequencies. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time series procedures for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we formally develop the links between the conditional covariance matrix and the concept of realized volatility. Next, using continuously recorded observations for the Deutschemark/Dollar and Yen/Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to popular daily ARCH and related models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, gives rise to well-calibrated density forecasts of future returns, and correspondingly accurate quantile estimates. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.
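The two building blocks the three abstracts above share, realized covariance from intraday returns and a vector autoregression for the log realized volatilities, can be sketched as follows. The VAR(1) here is a much simpler stand-in for the long-memory specification used in the paper.

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Daily realized covariance: sum of outer products of the intraday
    return vectors within one day. intraday_returns is (m, n)."""
    return intraday_returns.T @ intraday_returns

def fit_var1(log_rv):
    """Least-squares VAR(1) on daily log realized volatilities.

    log_rv: (T, n). Returns intercept c and coefficient matrix A such that
    log_rv[t] is approximated by c + A @ log_rv[t-1].
    """
    Y = log_rv[1:]
    X = np.hstack([np.ones((len(Y), 1)), log_rv[:-1]])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B[0], B[1:].T
```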
- Research Article
35
- 10.1016/j.physa.2008.02.045
- Mar 4, 2008
- Physica A: Statistical Mechanics and its Applications
Random matrix theory filters in portfolio optimisation: A stability and risk assessment
- Research Article
64
- 10.1016/j.rse.2017.08.020
- Aug 30, 2017
- Remote Sensing of Environment
Hyperspectral remote sensing of shallow waters: Considering environmental noise and bottom intra-class variability for modeling and inversion of water reflectance
- Research Article
- 10.24191/apmaj.v17i2-09
- Aug 31, 2022
- Asia-Pacific Management Accounting Journal
Increasing awareness of sustainability has led more firms to incorporate environmental, social, and governance (ESG) reporting into their establishments. However, firms' ESG information is beset with issues of information asymmetry. Hence, this study set out to provide a more uniform statistical method of measuring a firm's social performance (SP), termed ESG efficiency. The Data Envelopment Analysis (DEA) model was used to measure the ESG efficiency of firms in giving back to the masses by means of ESG contribution. Malaysia, Thailand, Singapore, and Indonesia were selected for the study, covering the years 2010 to 2019. The findings were relatively consistent for all selected countries. The study found that the ESG efficiency of both ESG and non-ESG firms fluctuated, with ESG firms' ESG efficiency fluctuating on an increasing trend. ESG firms were found to be more efficient in giving back to the masses, with higher mean technical efficiency (TE) than non-ESG firms. Furthermore, Pure Technical Inefficiency (PTIE) was identified as a significant factor influencing a firm's TE, indicating that the firms were managerially inefficient in directing their financial returns toward ESG contribution. This study thus provides a simpler method of SP measurement, identifies the significant factor affecting firms' ESG efficiency, and extends the literature on East Asia's ESG development. Keywords: ESG, data envelopment analysis, efficiency
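The abstract does not state its DEA formulation; a standard input-oriented CCR efficiency program, which the study's ESG-efficiency scores could resemble, is sketched below. The input/output layout and the use of scipy's linear-programming solver are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit o.

    X: (J, m) inputs and Y: (J, s) outputs for J units.
    Returns theta in (0, 1]; theta = 1 means unit o is on the frontier.
    """
    J, m = X.shape
    s = Y.shape[1]
    c = np.zeros(J + 1)
    c[0] = 1.0                                   # minimize theta
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * J)
    return float(res.x[0])
```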
- Research Article
6
- 10.1016/j.jspi.2019.11.003
- Jan 9, 2020
- Journal of Statistical Planning and Inference
Projected tests for high-dimensional covariance matrices