Abstract

Quantile functional linear regression was previously studied using functional principal component analysis. Here we consider an alternative penalized estimator based on the reproducing kernel Hilbert space (RKHS) framework. The motivation is that, for functional linear (mean) regression, Cai and Yuan (2012) showed that the RKHS-based approach performs better when the coefficient function does not align well with the eigenfunctions of the covariance kernel. We establish the optimal convergence rate in prediction risk, using Rademacher complexity to bound the relevant empirical processes. Monte Carlo studies are carried out for illustration.
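To make the estimator concrete, the following is a minimal illustrative sketch, not the paper's implementation: it discretizes the functional covariate on a grid, uses a Gaussian kernel as a stand-in for the reproducing kernel, replaces the nonsmooth check (pinball) loss with a smoothed surrogate so a quasi-Newton solver applies, and fits the slope function by minimizing the penalized empirical risk. All data, kernel, and tuning choices here are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 200, 50                       # sample size, grid points on [0, 1]
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Synthetic functional covariates (rough random paths) and an assumed
# true slope function; the response follows the functional linear model.
X = np.cumsum(rng.normal(size=(n, m)), axis=1) / np.sqrt(m)
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * dt + rng.normal(scale=0.1, size=n)

tau, lam, eps = 0.5, 1e-3, 1e-4      # quantile level, penalty, smoothing

# Gaussian kernel on the grid as a stand-in reproducing kernel;
# the RKHS norm penalty ||b||_K^2 is approximated by b' K^{-1} b.
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.1)
K_inv = np.linalg.inv(K + 1e-8 * np.eye(m))

def smoothed_pinball(r):
    # Smooth surrogate for the check loss rho_tau(r) = r (tau - 1{r < 0});
    # as eps -> 0 it recovers tau*r for r >= 0 and (tau - 1)*r for r < 0.
    return tau * r + eps * np.logaddexp(0.0, -r / eps)

def objective(b):
    # Penalized empirical quantile risk for the discretized slope b.
    residuals = y - X @ b * dt       # y_i - integral x_i(t) b(t) dt
    return smoothed_pinball(residuals).mean() + lam * b @ K_inv @ b

b0 = np.zeros(m)
res = minimize(objective, b0, method="L-BFGS-B")
beta_hat = res.x                     # estimated slope function on the grid
```

The actual estimator in the RKHS setting would exploit a representer theorem rather than a raw grid discretization, and the smoothing of the check loss is purely a computational convenience; the sketch only conveys the shape of the optimization problem.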
