Abstract

It may be misleading to estimate value-at-risk (VaR) or other risk measures under the assumption of normally distributed innovations in a model for a heteroscedastic financial return series. Using the t-distribution instead, or applying extreme value theory (EVT), has been proposed as a possible solution to this problem. We study the effect on the quality of risk estimators when estimation is based on a fitted normal inverse Gaussian (NIG) distribution. When VaR is the risk measure, the NIG-based approach is found to be more robust than the EVT method for sample sizes up to 250, and also in larger samples provided the NIG distribution fits; the EVT method should be used in large samples only when the NIG distribution does not fit adequately. For symmetric distributions, the t-based approach compares well with the NIG-based approach. When expected shortfall is the risk measure, the NIG-based approach is clearly the preferred method in small samples. Three formal test procedures are proposed for judging the quality of an NIG fit. A new parametrization of the NIG distribution, together with simple starting values for computing the maximum likelihood estimates, is also introduced. The procedures are illustrated by analyzing two financial return series.
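To make the NIG-based approach concrete, the sketch below fits an NIG distribution by maximum likelihood and reads off VaR and expected shortfall from the fitted quantile function. It is a minimal illustration only: it uses SciPy's norminvgauss parametrization and default ML fit rather than the paper's new parametrization and starting values, and the simulated residuals stand in for the standardized innovations of an actual heteroscedastic return model.

```python
# Minimal sketch of NIG-based VaR and expected shortfall estimation.
# Assumptions: scipy.stats.norminvgauss (SciPy's NIG parametrization,
# not the paper's), and simulated t-distributed residuals in place of
# standardized innovations from a fitted volatility model.
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(0)
# Hypothetical standardized residuals (unit variance, heavy tails).
residuals = rng.standard_t(df=5, size=250) / np.sqrt(5 / 3)

# Fit the NIG distribution by maximum likelihood.
a, b, loc, scale = norminvgauss.fit(residuals)

# VaR at level alpha: the (negated) lower alpha-quantile of the fit.
alpha = 0.01
var_nig = -norminvgauss.ppf(alpha, a, b, loc=loc, scale=scale)

# Expected shortfall: average loss beyond VaR, approximated by
# averaging the quantile function over (0, alpha].
u = np.linspace(1e-6, alpha, 1000)
es_nig = -norminvgauss.ppf(u, a, b, loc=loc, scale=scale).mean()

print(f"VaR(1%) = {var_nig:.3f}, ES(1%) = {es_nig:.3f}")
```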
