Abstract

Distributed learning is an effective way to analyze big data. In distributed regression, a typical approach is to partition the sample set into m disjoint subsets of equal size, apply the kernel ridge regression algorithm to each subset to derive a local estimator, and then average the local estimators to obtain the global estimator. This paper considers distributed regularized least squares regression with dependent samples, namely α-mixing inputs, as studied in the pre-existing literature [15]. An error bound in the K-metric is derived, and a novel error decomposition method is used to prove the asymptotic convergence of this distributed regularization learning scheme. The learning rate of the algorithm is obtained under a standard regularity condition on the regression function and a polynomial-decay strongly mixing condition. It is proved that distributed learning is applicable not only to i.i.d. samples but also to dependent samples.
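
For concreteness, the divide-and-conquer scheme described above can be sketched in standard notation; the symbols here (subsets D_j, RKHS \(\mathcal{H}_K\) with norm \(\|\cdot\|_K\), regularization parameter \(\lambda\)) are assumed from the usual kernel ridge regression setting rather than defined in the abstract itself:

\[
f_{D_j,\lambda} = \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{|D_j|} \sum_{(x_i, y_i) \in D_j} \big( f(x_i) - y_i \big)^2 + \lambda \|f\|_K^2 \right\},
\qquad
\bar{f}_{D,\lambda} = \frac{1}{m} \sum_{j=1}^{m} f_{D_j,\lambda},
\]

where \(f_{D_j,\lambda}\) is the local estimator computed on the j-th subset and \(\bar{f}_{D,\lambda}\) is the averaged global estimator.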
