Abstract

For the problem of partitioning a space whose true division is given by a blurred boundary, any learning algorithm can make the probability ε of incorrectly predicting an individual example decrease with the number of training examples t. We address here how the asymptotic form of ε(t), as well as its limit of convergence, reflects the choice of learning algorithm. The error-minimum algorithm is found to exhibit rather slow convergence of ε(t) to its lower bound ε₀: ε(t) − ε₀ ∼ O(t^{−2/3}). Even for the purpose of minimizing prediction error, the maximum-likelihood algorithm can be used as an alternative. If the true probability distribution happens to be contained in the family of hypothetical functions, then the boundary estimated from the hypothetical distribution eventually converges to the best choice, and the prediction error converges as ε(t) − ε₀ ∼ O(t^{−1}). If the true distribution is not realizable within the hypothetical family, however, the boundary generally does not converge to the best choice; instead ε(t) − ε₁ ∼ ±O(t^{−1/2}), where ε₁ > ε₀ > 0.
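
To make the setting concrete, here is a minimal numerical sketch (not from the paper): a one-dimensional two-class problem whose conditional probability P(y=1|x) is a sigmoid, so the class boundary is blurred. The parameters A and B, the uniform input distribution, and the helper names (`sample`, `error_minimum_boundary`, `ml_boundary`) are all illustrative assumptions. The error-minimum rule picks the threshold with the fewest training mistakes; the maximum-likelihood rule fits the sigmoid itself and reads off where it crosses 1/2. Since the fitted family contains the truth here, this is the realizable case: the excess error of the error-minimum rule should shrink roughly like t^{−2/3} and that of the maximum-likelihood rule like t^{−1}.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters for illustration: A sets how blurred the
# boundary is, B is the location of the true (Bayes-optimal) boundary.
A, B = 4.0, 0.3

def sample(t):
    """Draw t examples: x uniform on [-1, 1], label 1 w.p. sigmoid(A(x-B))."""
    x = rng.uniform(-1.0, 1.0, t)
    y = (rng.random(t) < sigmoid(A * (x - B))).astype(int)
    return x, y

def generalization_error(b_hat):
    """Error of the rule 'predict 1 iff x > b_hat', by numerical integration."""
    xs = np.linspace(-1.0, 1.0, 20001)
    p1 = sigmoid(A * (xs - B))
    return np.where(xs > b_hat, 1.0 - p1, p1).mean()

def error_minimum_boundary(x, y):
    """Threshold minimizing training error, via one cumulative-sum scan."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    ones_left = np.concatenate(([0], np.cumsum(ys == 1)))   # 1s misread as 0
    zeros_left = np.concatenate(([0], np.cumsum(ys == 0)))
    zeros_right = zeros_left[-1] - zeros_left                # 0s misread as 1
    i = int(np.argmin(ones_left + zeros_right))
    return -1.0 if i == 0 else xs[i - 1]

def ml_boundary(x, y, steps=2000, lr=2.0):
    """1-D logistic regression by gradient ascent; boundary is -c/w."""
    w, c = 1.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * x + c)
        w += lr * np.mean((y - p) * x)
        c += lr * np.mean(y - p)
    return -c / w

eps0 = generalization_error(B)  # Bayes error of the best boundary
for t in (100, 1000, 10000):
    em, ml = [], []
    for _ in range(30):  # average excess error over independent training sets
        x, y = sample(t)
        em.append(generalization_error(error_minimum_boundary(x, y)) - eps0)
        ml.append(generalization_error(ml_boundary(x, y)) - eps0)
    print(f"t={t:6d}  error-min excess: {np.mean(em):.5f}  "
          f"max-lik excess: {np.mean(ml):.5f}")
```

The unrealizable case described at the end of the abstract would correspond to fitting a family that does not contain the true conditional distribution; the estimated boundary then settles at a suboptimal location, leaving a gap ε₁ − ε₀ that no amount of data removes.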
