Abstract
We discuss two questions concerning the convergence of a class of iterative processes for finding the minimum point of a convex functional. The iterative process is first viewed as arising from a sequence of contraction mappings whose contraction constants approach one, and the rate of convergence of the process is discussed in terms of these constants. We then study the convergence of gradient-type methods subject to random errors, obtaining sufficient conditions for various types of probabilistic convergence.
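To make the second question concrete, the following is a minimal sketch (not the paper's construction) of a gradient-type method in which each gradient evaluation is corrupted by a zero-mean random error; the diminishing step sizes 1/(k+1) are the standard Robbins–Monro choice that damps the noise so the iterates can still converge to the minimizer.

```python
import random

def noisy_gradient_descent(grad, x0, steps=2000, seed=0):
    """Minimize a convex function of one variable given only
    noisy gradient evaluations.

    At each step k the true gradient is perturbed by an additive
    zero-mean Gaussian error; the step size 1/(k+1) shrinks fast
    enough to suppress the accumulated noise but slowly enough
    to keep making progress (a Robbins-Monro schedule).
    """
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        noise = rng.gauss(0.0, 0.1)          # random error in the gradient
        x -= (1.0 / (k + 1)) * (grad(x) + noise)
    return x

# Example: f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3.
x_min = noisy_gradient_descent(lambda x: 2.0 * (x - 3.0), x0=10.0)
```

Here the iterate approaches the true minimizer despite the random errors; the quadratic objective and noise level are illustrative assumptions, not taken from the paper.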