Abstract

In this paper, we complement a study recently conducted by H.A. Mombeni, B. Masouri and M.R. Akhoond by introducing five new asymmetric kernel c.d.f. estimators on the half-line [0,∞), namely the Gamma, inverse Gamma, LogNormal, inverse Gaussian and reciprocal inverse Gaussian kernel c.d.f. estimators. For these five new estimators, we prove asymptotic normality and derive asymptotic expressions for the bias, variance, mean squared error and mean integrated squared error. A numerical study then compares the performance of the five new c.d.f. estimators against traditional methods and the Birnbaum–Saunders and Weibull kernel c.d.f. estimators from Mombeni, Masouri and Akhoond. Using the same experimental design, we show that the LogNormal and Birnbaum–Saunders kernel c.d.f. estimators perform the best overall, while the other asymmetric kernel estimators are sometimes better but always at least competitive against the boundary kernel method of C. Tenreiro.

Highlights

  • In this paper, we complement a study recently conducted by H.A. Mombeni, B. Masouri and M.R. Akhoond.

  • The boundary kernel (BK) c.d.f. estimator would have been the go-to method in the past, but our results show that the total of the ISE mean differences to the best ISE means is more than three times lower for the LN and B–S kernel c.d.f. estimators than for the BK c.d.f. estimator when n = 256, and more than two times lower when n = 1000.

  • We considered five new asymmetric kernel c.d.f. estimators, namely the Gamma (Gam), inverse Gamma (IGam), LogNormal (LN), inverse Gaussian (IGau) and reciprocal inverse Gaussian (RIG) kernel c.d.f. estimators.


Introduction

We complement a study recently conducted by H.A. Mombeni, B. Masouri and M.R. Akhoond. The parameters of the kernel function can vary in a way that makes the mode, the median or the mean equal to x. This variable smoothing allows asymmetric kernel estimators to behave better than traditional kernel estimators (see, e.g., Rosenblatt [4], Parzen [5]) near the boundary of the support in terms of their bias. Since the variable smoothing is integrated directly in the parametrization of the kernel function, asymmetric kernel estimators are usually simpler to implement than boundary kernel methods (see, e.g., Gasser and Müller [6], Rice [7], Gasser et al. [8], Müller [9], Zhang and Karunamuni [10,11]). The interested reader is referred to Hirukawa [40] and Section 2 of Ouimet and Tolosana-Delgado [36] for a review of some of these papers and an extensive list of papers dealing with asymmetric kernels in other settings.
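To make the idea of variable smoothing concrete, here is a minimal sketch of a Gamma kernel c.d.f. estimator in Python. It assumes the standard smoothed-empirical construction in which the estimate at x averages, over the sample, the survival function of a Gamma(shape = x/b, scale = b) kernel whose mean equals x; the function name and the parametrization are illustrative choices, not the exact formulas from the paper.

```python
import numpy as np
from scipy.stats import gamma


def gamma_kernel_cdf(x, data, b):
    """Gamma kernel c.d.f. estimate at a point x > 0 with bandwidth b > 0.

    As b -> 0, a Gamma(shape=x/b, scale=b) random variable (mean x,
    variance x*b) concentrates at x, so its survival function evaluated
    at each observation X_i tends to the indicator 1{X_i <= x}, and the
    estimator recovers the empirical c.d.f. in the limit.
    """
    data = np.asarray(data, dtype=float)
    # Average the kernel survival function over the sample.
    return float(np.mean(gamma.sf(data, a=x / b, scale=b)))
```

For standard exponential data, `gamma_kernel_cdf(1.0, data, b)` with a small bandwidth and a large sample should be close to the true value F(1) = 1 − e⁻¹ ≈ 0.632. Note how the smoothing is built into the kernel's parameters (shape x/b varies with the evaluation point), which is what makes the implementation simpler than a boundary kernel correction.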
