Abstract
A standard assumption in the learning theory literature is that samples are drawn independently from an identical distribution with uniformly bounded output. This excludes the common case of Gaussian-distributed outputs. In this paper we relax these assumptions to a more general setting: samples are drawn from a sequence of unbounded and non-identical probability distributions. By a drift error analysis and a Bennett inequality for unbounded random variables, we derive a satisfactory learning rate for the ERM algorithm.
Highlights
In learning theory we study the problem of finding a function, or an approximation of it, that reflects the relationship between input and output via samples
It can be considered a mathematical analysis of artificial intelligence or machine learning
A typical mathematical setting of learning theory is the following: the input space X is a compact metric space, and the output space Y ⊂ ℝ for regression
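The least-squares setting described above can be sketched in standard notation (this is the usual formulation of least-squares ERM, assumed here rather than quoted from the paper): the target is the regression function, and the ERM estimator minimizes the empirical squared loss over a hypothesis space H given samples z = {(x_i, y_i)}_{i=1}^m.

```latex
% Regression function (the target of learning):
f_\rho(x) = \int_Y y \, d\rho(y \mid x)

% Least-squares ERM estimator over a hypothesis space \mathcal{H}:
f_z = \arg\min_{f \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \big(f(x_i) - y_i\big)^2
```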
Summary
(2016) Error Analysis of ERM Algorithm with Unbounded and Non-Identical Sampling. We extend the polynomial convergence condition on the conditional distribution sequence and impose a moment increment condition on the sequence in the least-squares ERM algorithm. The distribution sequence {ρ^(i)(y|x)}_{i≥1} is a polynomially convergent sequence, but not identical as in earlier settings; this, together with the unbounded output y, leads to the main difficulty of the error analysis in this paper. [17] shows that, under some conditions on the kernel and the target function f_ρ, an exponential convergence condition on the distribution sequence, and a suitable choice of parameters, the optimal rate of the online learning algorithm is close to O_p(1/m). We will bound the two types of errors respectively and obtain the total error bounds
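To make the ERM procedure concrete, here is a minimal sketch of least-squares ERM over a finite candidate set, with unbounded (Gaussian-noise) outputs as in the paper's setting. The finite hypothesis list and the function name `erm_least_squares` are illustrative assumptions; the paper works with a general hypothesis space, not a finite list.

```python
import numpy as np

def erm_least_squares(hypotheses, xs, ys):
    """Return the hypothesis with the smallest empirical squared loss.

    `hypotheses` is a list of candidate functions f: x -> y, standing in
    for the hypothesis space H of the ERM algorithm.
    """
    risks = [np.mean((np.array([f(x) for x in xs]) - ys) ** 2)
             for f in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Toy usage: samples from y = 2x + Gaussian noise (an unbounded output),
# with candidate linear models f_a(x) = a * x on a grid of slopes a.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 200)
ys = 2.0 * xs + rng.normal(0.0, 0.1, 200)
candidates = [lambda x, a=a: a * x for a in np.linspace(0.0, 4.0, 41)]
best = erm_least_squares(candidates, xs, ys)
```

With 200 samples and small noise, the empirical risk minimizer lands at (or next to) the true slope 2 on the grid; this illustrates the kind of convergence the error bounds quantify.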