Abstract

We consider Nadaraya–Watson type estimators for binary regression functions. We propose a method for improving the performance of such estimators by employing bias reduction techniques when estimating the constituent probability densities. Direct substitution of separately optimized density estimates into the regression function formula generates disappointing results in practice. However, adjusting the global smoothing parameter to optimize a performance criterion for the binary regression function itself is more promising. We focus on an implementation of this approach which uses a variable kernel technique to provide reduced bias density estimates, and in which the global bandwidth is selected by an appropriately tailored leave-one-out (cross-validation) method. Theory and numerical experiments show that this form of bias reduction improves performance substantially when the underlying regression function is highly non-linear, but offers no benefit when the underlying regression function is close to linear in form.
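The basic ingredients described above can be illustrated with a minimal sketch: a Nadaraya–Watson estimate of P(Y = 1 | X = x) with binary responses, and a global bandwidth chosen by leave-one-out cross-validation on the regression function itself. This uses a fixed Gaussian kernel; the paper's variable-kernel bias reduction step is omitted, and all names and the example data are illustrative, not the authors' implementation.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate of P(Y=1 | X=x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loo_cv_score(x, y, h):
    """Leave-one-out squared-error score for a candidate bandwidth h:
    each observation is predicted from all the others."""
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        errs.append((y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2)
    return np.mean(errs)

# Simulated binary-regression data (illustrative only): a logistic-shaped
# true regression function, well into the "highly non-linear" regime.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
p = 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))
y = (rng.uniform(size=200) < p).astype(float)

# Select the global bandwidth by minimizing the leave-one-out score.
grid = np.linspace(0.02, 0.5, 25)
h_best = min(grid, key=lambda h: loo_cv_score(x, y, h))
```

In a bias-reduced version, `nw_estimate` would be replaced by a ratio of variable-kernel density estimates (e.g. with local bandwidths shrunk where the estimated density is high), while the leave-one-out selection of the single global bandwidth proceeds in the same way.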

