Abstract

This paper addresses the problem of optimizing a function over a finite or countably infinite set of alternatives, in situations where this objective function cannot be evaluated exactly, but has to be estimated or measured. A special focus is on situations where simulation is used to evaluate the objective function. We present two versions of a new iterative method for solving such discrete stochastic optimization problems. In each iteration of the proposed method, a neighbor of the “current” alternative is selected, and estimates of the objective function evaluated at the current and neighboring alternatives are compared. The alternative that has a better observed function value becomes the next current alternative. We show how one version of the proposed method can be used to solve discrete optimization problems where the objective function is evaluated using transient or steady-state simulation, and we show how the other version can be applied to solve a special class of discrete stochastic optimization problems and present some numerical results. A major strength of the proposed method is that it spends most of the computational effort at local minimizers of the objective function. In fact, we show that for both versions of the proposed method, the alternative that has been visited most often in the first m iterations converges almost surely to a local optimizer of the objective function as m goes to infinity.
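To make the iterative scheme described above concrete, here is a minimal sketch in Python of a neighbor-comparison search that returns the most-visited alternative. It is an illustration of the general idea only, not the authors' exact method (which depends on how the estimates are obtained, e.g., via transient or steady-state simulation); the names `neighbors`, `sample_f`, and `discrete_stochastic_search` are hypothetical.

```python
import random
from collections import Counter

def discrete_stochastic_search(alternatives, neighbors, sample_f, num_iters,
                               minimize=True, rng=None):
    """Compare noisy estimates at the current alternative and a random neighbor,
    move to whichever looks better, and return the most-visited alternative."""
    rng = rng or random.Random()
    current = rng.choice(alternatives)        # arbitrary starting alternative
    visits = Counter({current: 1})

    for _ in range(num_iters):
        candidate = rng.choice(neighbors(current))  # select a neighboring alternative
        f_current = sample_f(current)               # noisy estimate at the current point
        f_candidate = sample_f(candidate)           # noisy estimate at the neighbor
        better = f_candidate < f_current if minimize else f_candidate > f_current
        if better:
            current = candidate                     # the better-looking alternative becomes current
        visits[current] += 1                        # record the visit

    # Report the alternative visited most often, mirroring the
    # "visited most often in the first m iterations" criterion in the abstract.
    return visits.most_common(1)[0][0]

# Example (assumed problem): minimize E[(x - 3)^2 + noise] over the integers 0..10,
# with neighbors x - 1 and x + 1.
alts = list(range(11))
nbrs = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 10]
noisy = lambda x: (x - 3) ** 2 + random.gauss(0, 1)
print(discrete_stochastic_search(alts, nbrs, noisy, num_iters=2000))
```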
