Abstract

In this article we implement the minimum density power divergence estimator (MDPDE) for the shape and scale parameters of the generalized Pareto distribution (GPD). The MDPDE is indexed by a constant α ≥ 0 that controls the trade-off between robustness and efficiency: as α increases, robustness increases and efficiency decreases. For α = 0 the MDPDE is equivalent to the maximum likelihood estimator (MLE). We show that for α > 0 the MDPDE for the GPD has a bounded influence function. For α < 0.2 the MDPDE maintains good asymptotic relative efficiencies, usually above 90%, and the results of a Monte Carlo study agree with these asymptotic calculations. The MDPDE is asymptotically normally distributed if the shape parameter is less than (1 + α)/(2 + α), and estimators of the standard errors are readily computed under this restriction. We compare the MDPDE, the MLE, Dupuis' optimal bias-robust estimator (OBRE), and Peng and Welsh's Medians estimator for the parameters. The simulations indicate that the MLE has the highest efficiency under uncontaminated GPDs; however, for GPDs contaminated with gross errors, the OBRE and the MDPDE are more efficient than the MLE. The Medians estimator performed poorly for all the simulated models we studied.
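To make the estimator concrete, here is a minimal sketch of an MDPDE fit for the GPD. It minimizes the standard density power divergence objective H_n(θ) = ∫ f_θ^{1+α} dx − (1 + 1/α) n⁻¹ Σᵢ f_θ(xᵢ)^α, using the closed form ∫ f^{1+α} dx = σ^{−α} / (1 + α(1 + ξ)) for the GPD density f(x; σ, ξ) = σ⁻¹(1 + ξx/σ)^{−1/ξ−1}. The function names, the log-scale parameterization, and the starting values are illustrative choices, not the authors' implementation; the ξ = 0 limiting case is not handled.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def dpd_objective(params, x, alpha):
    """Density power divergence objective for the GPD (alpha > 0)."""
    log_sigma, xi = params          # optimize log(scale) to keep sigma > 0
    sigma = np.exp(log_sigma)
    z = 1.0 + xi * x / sigma
    # Reject parameters that put observations outside the support,
    # or for which the integral term below diverges.
    if np.any(z <= 0.0) or 1.0 + alpha * (1.0 + xi) <= 0.0:
        return np.inf
    # GPD density at the observations (xi != 0 case)
    f = (1.0 / sigma) * z ** (-1.0 / xi - 1.0)
    # Closed form of the integral term: sigma^{-alpha} / (1 + alpha*(1 + xi))
    integral = sigma ** (-alpha) / (1.0 + alpha * (1.0 + xi))
    return integral - (1.0 + 1.0 / alpha) * np.mean(f ** alpha)

def mdpde_gpd(x, alpha=0.1):
    """Return (scale, shape) minimizing the DPD objective via Nelder-Mead."""
    res = minimize(dpd_objective,
                   x0=[np.log(np.mean(x)), 0.1],   # crude starting values
                   args=(np.asarray(x), alpha),
                   method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]

# Example: fit a clean GPD sample with a small alpha (near-MLE efficiency)
sample = genpareto.rvs(c=0.2, scale=1.0, size=2000, random_state=0)
scale_hat, shape_hat = mdpde_gpd(sample, alpha=0.1)
```

Small values such as α = 0.1 trade little efficiency for robustness, consistent with the relative-efficiency figures reported above; larger α downweights gross errors more aggressively.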
