Abstract

In financial practice and research we often encounter a large number of assets. The availability of high-frequency financial data makes it possible to estimate the large volatility matrix of these assets. Existing volatility matrix estimators, such as kernel realized volatility and pre-averaging realized volatility, perform poorly when the number of assets is very large; in fact, they are inconsistent when the number of assets and the sample size both go to infinity. In this paper, we introduce threshold rules to regularize kernel realized volatility, pre-averaging realized volatility, and multi-scale realized volatility. We establish asymptotic theory for these threshold estimators in a framework that allows both the number of assets and the sample size to go to infinity, and derive their convergence rates under sparsity assumptions on the large integrated volatility matrix. In particular, we show that the threshold kernel realized volatility and threshold pre-averaging realized volatility can achieve the optimal rate with respect to the sample size through proper bias corrections, but the bias adjustments cause the estimators to lose positive semi-definiteness; conversely, without the bias corrections the threshold kernel realized volatility and threshold pre-averaging realized volatility remain positive semi-definite but have slower convergence rates with respect to the sample size. A simulation study examines the finite-sample performance of the proposed threshold estimators with over a hundred assets.
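The core regularization device described above is elementwise thresholding of an estimated volatility matrix. The sketch below illustrates the idea with a simple hard-threshold rule applied to a toy integrated-volatility estimate; the function name, the threshold value, and the numbers are hypothetical illustrations, not the paper's actual estimators or tuning choices.

```python
import numpy as np

def hard_threshold(cov, tau):
    """Hard-threshold a volatility matrix estimate: zero out
    off-diagonal entries smaller than tau in absolute value,
    always keeping the diagonal (the variances)."""
    mask = np.abs(cov) >= tau
    np.fill_diagonal(mask, True)  # never threshold the variances
    return cov * mask

# Toy 3x3 integrated-volatility estimate (hypothetical numbers)
cov = np.array([[1.00, 0.01, 0.30],
                [0.01, 1.00, 0.002],
                [0.30, 0.002, 1.00]])

sparse_cov = hard_threshold(cov, tau=0.05)
# small off-diagonal entries (0.01, 0.002) are set to zero;
# the large entry 0.30 and the diagonal survive
```

Note that, as the abstract emphasizes for the bias-corrected estimators, setting entries of a matrix to zero (or adjusting them for bias) can destroy positive semi-definiteness, so in practice the thresholded estimate may need a further correction step if a valid covariance matrix is required.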
