Abstract

For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method in Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints with random inputs and coefficients. We study stochastic algorithms for nonconvex and nonsmooth problems, where the nonsmooth part is convex and the nonconvex part is an expectation, which is assumed to have a Lipschitz continuous gradient. The optimization variable is an element of a Hilbert space. We show almost sure convergence of strong limit points of the random sequence generated by the algorithm to stationary points. We demonstrate the stochastic proximal gradient algorithm on a tracking-type functional with an L^1-penalty term constrained by a semilinear PDE and box constraints, where input terms and coefficients are subject to uncertainty. We verify conditions for ensuring convergence of the algorithm and present a numerical simulation.

Highlights

  • We focus on stochastic approximation methods for solving a stochastic optimization problem on a Hilbert space H of the form min_{u∈H} f(u) = j(u) + h(u)

  • Our work is motivated by applications to partial differential equation (PDE)-constrained optimization under uncertainty, where a nonlinear PDE constraint can lead to an objective function that is nonconvex with respect to the Hilbert-valued variable

  • We present an asymptotic convergence analysis for two variants of the stochastic proximal gradient algorithm in Hilbert spaces

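The problem class in the highlights, min_{u∈H} f(u) = j(u) + h(u) with a smooth nonconvex expectation j and a convex nonsmooth h, can be illustrated with a minimal finite-dimensional sketch of the stochastic proximal gradient iteration. All problem data below (the random operator A(ξ), target b, penalty weight lam, step sizes) are illustrative stand-ins, not the paper's PDE model:

```python
import numpy as np

# Toy stand-in for the Hilbert-space problem min_{u in H} j(u) + h(u):
#   j(u) = E[ 0.5 * ||A(xi) u - b||^2 ]   (smooth, expectation part)
#   h(u) = lam * ||u||_1                  (convex, nonsmooth part)
# Iteration: u_{n+1} = prox_{t_n h}( u_n - t_n * G(u_n, xi_n) ),
# where G is a sampled gradient of j.

rng = np.random.default_rng(0)
d = 20
b = rng.normal(size=d)
lam = 0.1

def sampled_gradient(u):
    # One sample of grad j: A(xi)^T (A(xi) u - b), with
    # A(xi) = I + small random perturbation as a toy random operator.
    A = np.eye(d) + 0.05 * rng.normal(size=(d, d))
    return A.T @ (A @ u - b)

def prox_l1(v, t):
    # Proximal operator of t * lam * ||.||_1: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

u = np.zeros(d)
for n in range(1, 2001):
    t = 1.0 / n  # decreasing step sizes with sum t_n = infinity
    u = prox_l1(u - t * sampled_gradient(u), t)
```

After the loop, `u` approximates a stationary point; for this nearly quadratic toy problem it lands close to the soft-thresholded target. The paper's setting replaces the vector `u` with a Hilbert-space element and the toy operator with a PDE solution map.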

Summary

Introduction

The authors of [25] used this idea to solve a regression problem using finite differences subject to noise. Algorithms of this kind, with bias in addition to stochastic noise, are sometimes called stochastic quasi-gradient methods; see, e.g., [17, 53]. Basic convergence arguments rely on decreasing step sizes t_n of the form ∑_{n=1}^∞ t_n = ∞. There have been a number of contributions with proofs of convergence of the stochastic gradient method for unconstrained nonconvex problems; see [6, 7, 49, 56]. Applications of the stochastic gradient method to PDE-constrained optimization have already been explored in [19, 37]. In these works, convexity of the objective function is assumed, leaving the question of convergence in the more general case entirely open.

Notation and background
Variance‐reduced stochastic proximal gradient method
Stochastic proximal gradient method: decreasing step sizes
Application to PDE‐constrained optimization under uncertainty
Model problem
Numerical experiments
Conclusion
Objective function fN
A Auxiliary results
B Auxiliary proofs for application
C Differentiability of expectation functionals