A kernel smoother is an intuitive estimate of a regression function or conditional expectation; at each point $x_0$ the estimate of $E(Y\mid x_0)$ is a weighted mean of the sample $Y_i$, with observations close to $x_0$ receiving the largest weights. Unfortunately this simplicity has flaws. At the boundary of the predictor space, the kernel neighborhood is asymmetric and the estimate may have substantial bias. Bias can be a problem in the interior as well if the predictors are nonuniform or if the regression function has substantial curvature. These problems are particularly severe when the predictors are multidimensional. A variety of kernel modifications have been proposed to provide approximate and asymptotic adjustment for these biases. Such methods generally place substantial restrictions on the regression problems that can be considered; in unfavorable situations, they can perform very poorly. Moreover, the necessary modifications are very difficult to implement in the multidimensional case. Local regression smoothers fit lower-order polynomials in $x$ locally at $x_0$, and the estimate of $f(x_0)$ is taken from the fitted polynomial at $x_0$. They automatically, intuitively and simultaneously adjust for both the biases above to the given order and generalize naturally to the multidimensional case. They also provide natural estimates for the derivatives of $f$, an approach more attractive than using higher-order kernel functions for the same purpose.
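To make the contrast concrete, here is a minimal sketch (not taken from the paper) of the two estimators described above: a kernel smoother as a weighted mean of the $Y_i$, and a local linear regression fit at a point $x_0$. The Gaussian kernel, the bandwidth `h`, and the simulated data are illustrative assumptions only.

```python
# Illustrative sketch: kernel smoothing vs. local linear regression at a point x0.
# The kernel, bandwidth, and data are assumptions for demonstration, not the paper's setup.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2)

def kernel_smooth(x, y, x0, h):
    # Weighted mean of the Y_i, with observations near x0 receiving the largest weights.
    w = gaussian_kernel((x - x0) / h)
    return np.sum(w * y) / np.sum(w)

def local_linear(x, y, x0, h):
    # Fit a weighted least-squares line in centered coordinates (x - x0).
    # The intercept estimates f(x0); the slope estimates the derivative f'(x0).
    w = gaussian_kernel((x - x0) / h)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=100)

x0, h = 0.05, 0.1  # a point near the left boundary, where plain kernel smoothing is biased
print("kernel smoother:", kernel_smooth(x, y, x0, h))
print("local linear:   ", local_linear(x, y, x0, h)[0])
print("true f(x0):     ", np.sin(2 * np.pi * x0))
```

Near the boundary the kernel neighborhood is one-sided, so the weighted mean is pulled toward the interior; the local linear fit corrects this first-order bias automatically, which is the behavior the abstract points to.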