Abstract

Conditional heteroscedastic (CH) models are routinely used to analyze financial datasets. The classical models such as ARCH-GARCH with time-invariant coefficients are often inadequate to describe frequent changes over time due to market variability. However, we can achieve significantly better insight by considering the time-varying analogs of these models. In this paper, we propose a Bayesian approach to the estimation of such models and develop a computationally efficient MCMC algorithm based on Hamiltonian Monte Carlo (HMC) sampling. We also establish posterior contraction rates with increasing sample size in terms of the average Hellinger metric. The performance of our method is compared with frequentist estimates and estimates from the time-constant analogs. To conclude the paper, we obtain time-varying parameter estimates for some popular Forex (currency exchange rate) and stock market datasets.

Highlights

  • For datasets observed over a long period, stationarity turns out to be an oversimplified assumption that ignores systematic deviations of parameters from constancy

  • We develop a Bayesian estimation method for time-varying analogs of AutoRegressive Conditional Heteroscedasticity (ARCH), Generalized ARCH (GARCH), and integrated GARCH (iGARCH) models

  • A strong motivation for implementing such a Bayesian methodology was to circumvent the huge sample sizes that frequentist and kernel-based methods typically require for effective estimation
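As an illustration of the time-varying analogs referred to above, the constant GARCH coefficients are replaced by smooth functions of rescaled time $t/n$. A standard tvGARCH(1,1) formulation (the paper's exact parameterization may differ) is:

```latex
X_t = \sigma_t \varepsilon_t, \qquad
\sigma_t^2 = \mu\!\left(\tfrac{t}{n}\right)
           + \alpha\!\left(\tfrac{t}{n}\right) X_{t-1}^2
           + \beta\!\left(\tfrac{t}{n}\right) \sigma_{t-1}^2,
```

where the $\varepsilon_t$ are i.i.d. mean-zero innovations. The tvARCH case corresponds to $\beta \equiv 0$, and the tviGARCH case imposes $\alpha(u) + \beta(u) = 1$ for all $u \in [0,1]$.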


Summary

Introduction

For datasets observed over a long period, stationarity turns out to be an oversimplified assumption that ignores systematic deviations of parameters from constancy. The recursive definition of these models and the subsequent kernel-based estimation methods make it difficult to achieve satisfactory results for relatively small sample sizes. We develop a Bayesian estimation method for time-varying analogs of ARCH, GARCH, and iGARCH models. A strong motivation for implementing such a Bayesian methodology was to circumvent the huge sample sizes that frequentist and kernel-based methods require for effective estimation. This requirement on sample size has been frequently pointed out in the literature on ARCH/GARCH models, and it led us to ask whether a reasonable estimation scheme could be designed in a Bayesian way. The supplementary materials (Karmakar and Roy, 2021) contain theoretical proofs and some additional results.
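To make the time-varying setup concrete, the sketch below simulates a tvARCH(1) process whose intercept and ARCH coefficient are smooth functions of rescaled time. This is a minimal illustration of the data-generating mechanism, not the paper's estimation method; the particular coefficient functions are hypothetical choices for demonstration.

```python
import numpy as np

def simulate_tv_arch1(n, omega, alpha, seed=0):
    """Simulate x_t = sigma_t * eps_t with time-varying ARCH(1) variance
    sigma_t^2 = omega(t/n) + alpha(t/n) * x_{t-1}^2, where omega and
    alpha are functions on [0, 1] (rescaled time) and eps_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        u = t / n  # rescaled time in (0, 1]
        sigma2 = omega(u) + alpha(u) * x[t - 1] ** 2
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
    return x

# Hypothetical example: the intercept drifts upward while the
# ARCH coefficient decays over the observation window.
x = simulate_tv_arch1(
    1000,
    omega=lambda u: 0.1 + 0.2 * u,
    alpha=lambda u: 0.4 * (1.0 - 0.5 * u),
)
```

Under such a design, parameter estimates from a time-constant ARCH fit average over the drifting coefficients, which is the inadequacy the time-varying models address.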

Modeling
Posterior Computation and Implementation
Large-Sample Properties
Simulation
Real Data Application
Model Comparison
Findings
Discussion and Conclusion