Abstract

The theory of large deviations is applied to the study of the asymptotic properties of the stochastic approximation algorithms (1.1) and (1.2). The method provides a useful alternative to the currently used technique of obtaining rate-of-convergence results by studying the sequence {(X_n - θ)/√a_n} (for (1.1)), where θ is a 'stable' point of the algorithm. Let G be a bounded neighborhood of θ which is in the domain of attraction of θ for the 'limit ODE'. The process x^n(·) is defined as a 'natural interpolation' of {X_j, j ≥ n} with x^n(0) = X_n and interpolation intervals {a_j, j ≥ n}. Define τ_G^n = min{t : x^n(t) ∉ G}. Then it is shown (among other things) that P_x{τ_G^n ≤ T} ~ exp(-n^q V), where q depends on {a_n, c_n}, and V depends on b(·), the covariance of the noise, and G. Such estimates imply that the asymptotic behavior is much better than suggested by the 'local linearization methods', and they yield considerable new insight into the asymptotic behavior. The technique is applicable to related problems in the asymptotic analysis of recursive algorithms, and it requires weaker conditions on the dynamics than do the 'linearization methods'. The necessary basic background is provided, and the optimal control problems associated with obtaining the V above are derived.
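To make the construction above concrete, the following is a minimal numerical sketch. It does not reproduce the paper's algorithms (1.1) and (1.2), which are not stated in this excerpt; instead it assumes a simple Robbins-Monro-type recursion X_{n+1} = X_n + a_n(b(X_n) + ξ_n) with step sizes a_n = 1/(n+1), drift b(x) = -x (so θ = 0 is a stable point of the limit ODE), and G = (-1, 1). The interpolated process x^n(·) is built with interpolation intervals {a_j, j ≥ n}, and the escape probability P_x{τ_G^n ≤ T} is estimated by simulation; all of these specific choices are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's algorithms (1.1)/(1.2), which are
# not reproduced in this excerpt): a Robbins-Monro-type recursion
#   X_{n+1} = X_n + a_n * (b(X_n) + xi_n),
# with step sizes a_n = 1/(n+1), drift b(x) = -x so that theta = 0 is a stable
# point of the limit ODE dx/dt = b(x), and G = (-1, 1) as a bounded
# neighborhood of theta inside its domain of attraction.

rng = np.random.default_rng(0)

def b(x):
    return -x                        # drift; theta = 0 is the stable point

def a(n):
    return 1.0 / (n + 1)             # decreasing step sizes

def escape_time(n0, T, x0=0.0, sigma=2.0):
    """Simulate the interpolated process x^n(.) started from x^n(0) = x0 at
    iterate n0, and return the first interpolated time at which it leaves
    G = (-1, 1), or infinity if it stays in G up to time T."""
    x, t, j = x0, 0.0, n0
    while t < T:
        xi = sigma * rng.standard_normal()   # i.i.d. zero-mean noise
        x = x + a(j) * (b(x) + xi)           # one stochastic-approximation step
        t += a(j)                            # interpolation interval a_j
        j += 1
        if abs(x) >= 1.0:                    # x^n(t) has left G
            return t
    return float("inf")

# Empirical estimate of P_x{tau_G^n <= T} for increasing starting iterates n.
T, reps = 2.0, 500
for n0 in (10, 100, 1000):
    escapes = sum(escape_time(n0, T) <= T for _ in range(reps))
    print(f"n = {n0:5d}   escape frequency = {escapes / reps:.3f}")
```

Under these assumed dynamics the printed escape frequencies fall off rapidly as the starting iterate n grows, which is the qualitative content of the exp(-n^q V) estimate above.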
