Abstract

This paper addresses optimization under uncertainty when the uncertain parameters are modeled as random variables. In contrast to traditional robust approaches, which reduce the problem to a deterministic worst-case formulation, the stochastic algorithms presented here work directly with the distribution of the random variables modeling the uncertainty. For single-objective problems such methods, based on the Robbins-Monro algorithm, are now classical. When several objectives are involved, the optimization problem becomes much harder, and the few methods available in the literature rely on genetic approaches coupled with Monte Carlo sampling, which is numerically very expensive. We present a new algorithm for solving the expectation formulation of stochastic smooth or nonsmooth multiobjective optimization problems. The proposed method extends the classical stochastic gradient algorithm to multiobjective optimization by exploiting the properties of a common descent vector. Both the mean-square and the almost-sure convergence of the algorithm are proven. The algorithm's efficiency is illustrated and assessed on an academic example.
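To make the idea concrete, the sketch below combines the two ingredients the abstract names: a common descent vector (here, the minimum-norm element of the convex hull of the objective gradients, which has a closed form for two objectives) and Robbins-Monro step sizes applied to sampled gradients. This is an illustrative approximation under our own assumptions, not the paper's exact algorithm; the toy problem and all function names are ours.

```python
import numpy as np

def common_descent_vector(g1, g2):
    """Minimum-norm vector in the convex hull of two gradients.

    For two objectives, min over t in [0, 1] of ||(1-t)*g1 + t*g2||
    has the closed-form solution t = g1.(g1 - g2) / ||g1 - g2||^2, clipped.
    The result is a descent direction for both objectives (or ~0 at a
    Pareto-stationary point, where the hull contains the origin)."""
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom < 1e-12 else float(np.clip((g1 @ diff) / denom, 0.0, 1.0))
    return (1.0 - t) * g1 + t * g2

def stochastic_multigradient(grad1, grad2, x0, n_iter=5000, a=0.2, seed=0):
    """Stochastic descent along the common descent vector of two sampled
    gradients, with Robbins-Monro steps a/(n+1) (sum diverges, sum of
    squares converges)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        xi = rng.standard_normal(2)           # one noise sample per iteration
        d = common_descent_vector(grad1(x, xi), grad2(x, xi))
        x = x - (a / (n + 1)) * d
    return x

# Toy expectation problem: f1(x) = E||x - (t1 + 0.1*xi)||^2 and
# f2(x) = E||x - (t2 + 0.1*xi)||^2; the Pareto set of the expectations
# is the segment [t1, t2].
t1, t2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
g1 = lambda x, xi: 2.0 * (x - (t1 + 0.1 * xi))
g2 = lambda x, xi: 2.0 * (x - (t2 + 0.1 * xi))
x_star = stochastic_multigradient(g1, g2, x0=[2.0, -1.0])
```

Starting from a point off the Pareto set, the iterates drift toward the segment joining the two targets and stall once the sampled gradients' convex hull contains (approximately) the origin, i.e. near a Pareto-stationary point.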
