Abstract

The design of very large learning systems presents many unsolved challenges. Consider, for instance, a system that ‘watches’ television for a few weeks and learns to enumerate the objects present in these images. Most current learning algorithms do not scale well enough to handle such massive quantities of data. Experience suggests that stochastic learning algorithms are best suited to such tasks. This is at first surprising, because stochastic learning algorithms optimize the training error rather slowly. Our paper reconsiders convergence speed in terms of how fast a learning algorithm optimizes the testing error. This reformulation shows the superiority of well-designed stochastic learning algorithms. Copyright © 2005 John Wiley & Sons, Ltd.
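
To make the distinction the abstract draws concrete, the sketch below contrasts a stochastic (per-example) gradient update with a batch (full-pass) update on a simple least-squares problem. This is an illustrative example only, not code from the paper; the data sizes, step sizes, and variable names are assumptions chosen for demonstration.

```python
# Illustrative sketch (not from the paper): stochastic vs. batch gradient
# descent on a least-squares objective. All sizes and step sizes are
# hypothetical choices for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def batch_step(w, lr):
    # One batch update: gradient of the mean squared error over all n examples.
    grad = X.T @ (X @ w - y) / n
    return w - lr * grad

def stochastic_step(w, lr, i):
    # One stochastic update: gradient estimated from a single example i.
    xi, yi = X[i], y[i]
    grad = (xi @ w - yi) * xi
    return w - lr * grad

# Batch gradient descent: each iteration touches every example.
w_batch = np.zeros(d)
for _ in range(50):
    w_batch = batch_step(w_batch, lr=0.1)

# Stochastic gradient descent: a single pass over the data, one example at a time.
w_sgd = np.zeros(d)
for i in rng.permutation(n):
    w_sgd = stochastic_step(w_sgd, lr=0.05)

print("batch parameter error:     ", np.linalg.norm(w_batch - w_true))
print("stochastic parameter error:", np.linalg.norm(w_sgd - w_true))
```

The point of the sketch is that the stochastic variant has already visited every example after one inexpensive pass, which is the regime in which the paper's testing-error argument applies.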
