Abstract

The purpose of this paper is to study the dynamical behavior of the sequence produced by a Forward-Backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudotrajectory, in the sense of Benaïm and Hirsch, of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates toward a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization problems and variational inequalities in random environments.
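The abstract does not spell out the iteration; a standard form of the random forward-backward update is x_{n+1} = (I + γ_{n+1} A(ξ_{n+1}, ·))^{-1}(x_n − γ_{n+1} B(ξ_{n+1}, x_n)), with step sizes γ_n decreasing to zero. The sketch below is a minimal one-dimensional illustration under that assumption and is not the paper's construction: the random operators, the step-size schedule, and the example problem are hypothetical choices made only to show the type of scheme being studied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random operators (for illustration only):
#   A(lam, .) = lam * subdifferential of |x|   -> resolvent is soft-thresholding
#   B(xi, .)  = gradient of 0.5 * (x - xi)^2   -> single-valued monotone map
# Their mean (Aumann-integral) operators are 0.5 * d|x| and x - E[xi].

def resolvent_A(y, lam, gamma):
    """Backward step: (I + gamma * A(lam, .))^{-1}(y), i.e. soft-thresholding."""
    return np.sign(y) * max(abs(y) - gamma * lam, 0.0)

def B(x, xi):
    """Forward step operator: gradient of 0.5 * (x - xi)^2."""
    return x - xi

x = 5.0        # initial point
avg = 0.0      # running empirical mean of the iterates
n_iter = 20000
for n in range(1, n_iter + 1):
    gamma = 1.0 / n ** 0.7          # decreasing, non-summable step sizes
    lam = rng.uniform(0.0, 1.0)     # random sample entering A
    xi = rng.normal(1.0, 1.0)       # random sample entering B
    # Random forward-backward update: forward step on B, backward step on A
    x = resolvent_A(x - gamma * B(x, xi), lam, gamma)
    avg += (x - avg) / n            # averaged (empirical-mean) iterate

print("last iterate:", x, "averaged iterate:", avg)
```

With these illustrative choices, the sum of the mean operators is 0.5 ∂|x| + (x − 1), whose unique zero is x = 0.5, so the averaged iterate should settle near 0.5, mirroring the convergence of empirical means stated in the abstract.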
