Abstract

An improvement to the classic adaptive kernel estimator is made by incorporating first-order dynamics in a neural network framework, yielding a fully self-consistent probability density function (pdf) estimate. The dynamics give rise to nonlinear interactions between the kernel parameters, in contrast to the adaptive kernel estimator, which is a simple three-step procedure. Adaptive kernel estimates have an asymptotic convergence rate of O(h⁴) if the errors involved in the pilot estimate can be ignored, compared with standard kernel estimators, which converge as O(h²). By being fully self-consistent, the proposed method is also able to approach the theoretical O(h⁴) convergence rate while providing smoother estimates of the distribution tails than the adaptive kernel estimator. A one-dimensional application to the estimation of a log-normal distribution is included as an example.
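For reference, the three-step adaptive kernel estimator that the abstract contrasts against can be sketched as below. This is a minimal sketch of the classic Abramson-style procedure (pilot estimate, local bandwidth factors, final estimate), not the paper's neural-network method; the function name adaptive_kde, the sensitivity exponent alpha = 0.5, and the Silverman rule-of-thumb pilot bandwidth are illustrative assumptions.

```python
import numpy as np

def adaptive_kde(data, x, alpha=0.5):
    """Three-step adaptive kernel density estimate (Abramson-style sketch).

    Step 1: fixed-bandwidth Gaussian pilot estimate at the data points.
    Step 2: local bandwidth factors lambda_i = (pilot_i / g)^(-alpha),
            where g is the geometric mean of the pilot values.
    Step 3: final estimate with per-sample bandwidths h * lambda_i.
    """
    n = data.size
    # Silverman's rule-of-thumb pilot bandwidth (an assumed choice)
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    h = 0.9 * min(data.std(ddof=1), iqr / 1.34) * n ** (-0.2)

    # Step 1: pilot estimate evaluated at the data points themselves
    diffs = (data[:, None] - data[None, :]) / h
    pilot = np.exp(-0.5 * diffs**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    # Step 2: local bandwidth factors from the pilot values
    g = np.exp(np.log(pilot).mean())   # geometric mean of pilot values
    lam = (pilot / g) ** (-alpha)

    # Step 3: adaptive estimate on the evaluation grid
    hi = h * lam                       # per-sample bandwidths
    u = (x[:, None] - data[None, :]) / hi[None, :]
    return (np.exp(-0.5 * u**2) / (hi * np.sqrt(2 * np.pi))).mean(axis=1)

# Illustrative usage mirroring the abstract's one-dimensional log-normal example
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=500)
grid = np.linspace(0.01, 6.0, 400)
density = adaptive_kde(samples, grid)
```

Because each sample's bandwidth shrinks where the pilot density is high and widens where it is low, the tails are smoothed more aggressively than with a fixed bandwidth; the paper's criticism is that the pilot's errors feed into this step, which its self-consistent formulation avoids.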
