Abstract

Publisher Summary

In this chapter, the stochastic approximation procedure has been applied to supervised and nonsupervised learning problems. The problems of estimating parameters, probability measures, and probability density functions have been treated, with mean-square error used as a performance measure for the estimation procedures. Relationships between Bayesian estimation and stochastic approximation have been discussed; in some cases, the Bayesian learning (estimation) algorithms have been shown to fall within the framework of Dvoretzky's general stochastic approximation procedure, so that convergence both in the mean-square sense and with probability 1 is guaranteed. The mixture formulation is again used for nonsupervised learning, and the stochastic approximation procedure is employed to estimate the unknown parameters of a mixture distribution (or density) function. Dynamic stochastic approximation is applied to the learning (estimation) of slowly time-varying parameters. This procedure in general consists of a two-step approximation performed at each stage of the learning process: the first step corrects for the time-varying trend of the parameters being learned, and the second step is an ordinary stochastic approximation step.
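For concreteness, the two procedures summarized above can be sketched in code. The following Python snippet is a minimal illustration under assumed settings, not the chapter's own notation or experiments: it first runs an ordinary Robbins-Monro-type recursion with gain a_n = 1/n to estimate a stationary mean (the setting in which convergence in mean square and with probability 1 holds under the standard conditions), and then the two-step dynamic variant, in which the first step corrects for an assumed linear trend and the second step is the ordinary stochastic approximation correction. The variable names and numerical values (drift, sigma, n_steps) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a slowly drifting scalar parameter theta_n is
# observed through additive noise, z_n = theta_n + v_n.  The values of
# drift, sigma, and n_steps are illustrative, not from the chapter.
n_steps = 2000
drift = 0.002          # per-step linear trend of the parameter
sigma = 1.0            # observation noise standard deviation
theta = 5.0 + drift * np.arange(n_steps)
z = theta + sigma * rng.standard_normal(n_steps)

# Ordinary stochastic approximation (Robbins-Monro with gain a_n = 1/n).
# For a stationary parameter this reduces to the recursive sample mean,
# which converges in mean square and with probability 1.
est_static = np.zeros(n_steps)
x = 0.0
for n in range(n_steps):
    a = 1.0 / (n + 1)
    x = x + a * (z[n] - x)
    est_static[n] = x

# Dynamic stochastic approximation: a two-step update at each stage.
# Step 1 corrects for the (here, assumed known) time-varying trend;
# step 2 applies an ordinary stochastic approximation correction.
est_dynamic = np.zeros(n_steps)
x = 0.0
for n in range(n_steps):
    x_pred = x + drift                  # step 1: trend correction
    a = 1.0 / (n + 1)
    x = x_pred + a * (z[n] - x_pred)    # step 2: ordinary SA step
    est_dynamic[n] = x

print("final true value:    %.3f" % theta[-1])
print("static SA estimate:  %.3f" % est_static[-1])
print("dynamic SA estimate: %.3f" % est_dynamic[-1])
```

With the trend correction in place, the dynamic estimator tracks the drifting parameter, whereas the ordinary recursion, whose gain 1/n shrinks to zero, lags increasingly behind it.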
