Abstract

The α-level Conditional Tail Expectation (CTE) of a continuous random variable X is defined as its conditional expectation given the event {X > q_α}, where q_α denotes its α-level quantile. It is well known that the empirical CTE (the average of the n(1 − α) largest order statistics in a sample of size n) is a negatively biased estimator of the CTE. This bias vanishes as the sample size increases, but in small samples it can be significant; hence the need for bias correction. Although the bootstrap method has been suggested for correcting the bias of the empirical CTE, recent research shows that alternative kernel-based methods of bias correction perform better in some practical examples. To further understand this phenomenon, we conduct an asymptotic analysis of the exact bootstrap bias correction for the empirical CTE, focusing on its performance as a point estimator of the bias of the empirical CTE. We provide heuristics suggesting that the exact bootstrap bias correction is approximately a kernel-based estimator, albeit one using a bandwidth that converges to zero faster than the mean-square-optimal bandwidth. This approximation offers some insight into why the bootstrap method has markedly less residual bias, but at the cost of higher variance. We prove a central limit theorem (CLT) for the exact bootstrap bias correction using an alternative representation as an L_1 distance of the sample observations from the α-level empirical quantile. The CLT, in particular, shows that the bootstrap bias correction has a relative error of order n^(-1/4). In contrast, for any given ε > 0, and under the assumption that the sampling density is sufficiently smooth, a relative error of order O(n^(-1/2+ε)) is attainable using kernel-based estimators. Thus, in an asymptotic sense, the bootstrap bias correction is not an optimal point estimator of the bias in the case of smooth sampling densities. Bootstrapped risk measures have recently attracted interest as estimators in their own right; as an application, we derive the CLT for the bootstrap expectation of the empirical CTE. We also report on a simulation study of the effect of small sample sizes on the quality of the approximation provided by the CLT. In support of the bootstrap method, we show that the bootstrap bias correction is optimal if the sampling density is constrained only to be Lipschitz of order 1/2 (or, loosely speaking, to have only half a derivative). Because densities encountered in practice are typically at least twice differentiable, this optimality result largely fails to make the bootstrap method attractive to practitioners.
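For concreteness, the following is a minimal Python sketch of the two estimators discussed above: the empirical CTE and its exact bootstrap bias correction. It assumes nα is an integer, and the helper names (empirical_cte, exact_bootstrap_mean_cte) are ours for illustration, not the authors'. The "exact" bootstrap expectation is computed in closed form via the standard identity P(X*_(k) ≤ X_(i)) = P(Binomial(n, i/n) ≥ k) for bootstrap order statistics, so no Monte Carlo resampling is involved.

    import numpy as np
    from scipy.stats import binom

    def empirical_cte(x, alpha):
        # Average of the n(1 - alpha) largest order statistics; assumes
        # n * alpha is an integer, as in the abstract.
        x = np.sort(x)
        n = len(x)
        m = int(round(n * (1 - alpha)))  # number of tail observations
        return x[n - m:].mean()

    def exact_bootstrap_mean_cte(x, alpha):
        # Exact bootstrap expectation E*[CTE*] of the empirical CTE.
        # Each bootstrap order statistic X*_(k) is a weighted average of
        # the original order statistics, with weights obtained from the
        # binomial identity above; no resampling is needed.
        x = np.sort(x)
        n = len(x)
        m = int(round(n * (1 - alpha)))
        i = np.arange(1, n + 1)
        total = 0.0
        for k in range(n - m + 1, n + 1):           # the m largest bootstrap order statistics
            cdf = 1.0 - binom.cdf(k - 1, n, i / n)  # P(X*_(k) <= X_(i))
            weights = np.diff(cdf, prepend=0.0)     # P(X*_(k) = X_(i))
            total += np.dot(weights, x)
        return total / m

    rng = np.random.default_rng(0)
    x = rng.lognormal(size=50)  # small, right-skewed sample
    alpha = 0.9
    cte_hat = empirical_cte(x, alpha)
    # Bootstrap estimate of the (negative) bias E[CTE_hat] - CTE:
    bias_hat = exact_bootstrap_mean_cte(x, alpha) - cte_hat
    cte_corrected = cte_hat - bias_hat  # bootstrap bias-corrected CTE
    print(cte_hat, bias_hat, cte_corrected)

Because the bias of the empirical CTE is negative, bias_hat is typically negative and the corrected estimate is pushed upward, consistent with the higher-variance, lower-residual-bias trade-off described above.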
