Abstract

We extend the classical local likelihood approach to supervised classification to the case of functional covariates. We investigate the estimation of the functional parameter (slope parameter) in the linear model when the covariate is functional. On both simulated and real data, classification error rates estimated on test samples indicate that local likelihood estimation yields better estimators than classical kernel estimation. In addition, this approach no longer assumes that the linear predictor has a specific parametric form. It has two drawbacks, however: it is computationally more expensive and slower than kernel regression, and kernels other than the Gaussian kernel can cause the Newton-Raphson algorithm to diverge. With a Gaussian kernel, by contrast, 4 to 6 iterations suffice to achieve convergence.
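To illustrate the core machinery the abstract describes, the following is a minimal sketch of local likelihood estimation for binary classification, fitted by Newton-Raphson with Gaussian kernel weights. It is a simplified scalar-covariate analogue, not the paper's method: for a functional covariate, the local distances `x - x0` would be replaced by a semi-metric between curves, and all names (`local_logistic`, the bandwidth `h`) are illustrative assumptions.

```python
import numpy as np

def local_logistic(x, y, x0, h, max_iter=25, tol=1e-8):
    """Locally weighted logistic fit at x0 via Newton-Raphson.

    Maximizes the kernel-weighted log-likelihood
    sum_i K_h(x_i - x0) [ y_i * eta_i - log(1 + exp(eta_i)) ],
    with a local-linear predictor eta_i = b0 + b1 * (x_i - x0).
    Returns the coefficients and the number of iterations used.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local-linear design
    beta = np.zeros(2)
    for it in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))       # fitted probabilities
        grad = X.T @ (w * (y - p))                  # weighted score
        hess = X.T @ (X * (w * p * (1 - p))[:, None])  # weighted information
        step = np.linalg.solve(hess, grad)
        beta += step
        if np.max(np.abs(step)) < tol:              # Newton step has converged
            break
    return beta, it + 1

# Toy data: labels drawn from a true logistic curve P(Y=1|x) = sigmoid(2x).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(float)

beta, iters = local_logistic(x, y, x0=0.5, h=0.3)
p_hat = 1.0 / (1.0 + np.exp(-beta[0]))  # estimated P(Y=1 | x = 0.5)
```

With well-behaved Gaussian weights, the Newton-Raphson loop typically converges in only a handful of iterations, consistent with the 4-to-6 range reported in the abstract; heavier-tailed or compact-support kernels can make the weighted information matrix ill-conditioned, which is one way the divergence mentioned above arises.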
