Abstract

Random search algorithms are very useful for simulation optimization because they are relatively easy to implement and typically find a “good” solution quickly. One drawback is that strong convergence results to a global optimum require strong assumptions on the structure of the problem. This chapter begins by discussing optimization formulations for simulation optimization that combine expected performance with a measure of variability, or risk. It then summarizes theoretical results for several adaptive random search algorithms (including pure adaptive search, hesitant adaptive search, backtracking adaptive search, and annealing adaptive search) that converge in probability to a global optimum on ill-structured problems. More importantly, the complexity of these adaptive random search algorithms is linear in dimension, on average. While it is not possible to implement stochastic adaptive search exactly with the ideal linear performance, this chapter describes several algorithms that approximate stochastic adaptive search using a Markov chain Monte Carlo sampler known as hit-and-run. The first optimization algorithm discussed that uses hit-and-run is called improving hit-and-run, and it has polynomial complexity, on average, for a class of convex problems. Then a simulated annealing algorithm and a population-based algorithm, both using hit-and-run as the candidate point generator, are described. A variation of hit-and-run that can handle mixed continuous/integer feasible regions, called pattern hit-and-run, is also described. Pattern hit-and-run retains the same convergence results to a target distribution as hit-and-run on continuous domains. This broadly extends the class of optimization problems these algorithms can address to mixed continuous/integer feasible regions.
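To make the improving hit-and-run idea mentioned above concrete, the following Python sketch shows one minimal version on a box-constrained continuous problem: a uniformly random direction defines a chord through the feasible box, a candidate is sampled uniformly along that chord, and only improving candidates are accepted. The box-constrained setting, function names, and parameters are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np

def improving_hit_and_run(f, lower, upper, x0, n_iter=1000, rng=None):
    """Illustrative sketch of improving hit-and-run (IHR) for minimizing f
    over the box [lower, upper]; x0 must be a feasible starting point."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        # Step 1: sample a direction uniformly on the unit hypersphere.
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        # Step 2: find the chord {x + t*d} that stays inside the box.
        with np.errstate(divide="ignore", invalid="ignore"):
            t_lo = np.where(d != 0, (lower - x) / d, -np.inf)
            t_hi = np.where(d != 0, (upper - x) / d, np.inf)
        t_min = np.max(np.minimum(t_lo, t_hi))
        t_max = np.min(np.maximum(t_lo, t_hi))
        # Step 3: sample a candidate uniformly along the feasible chord.
        y = x + rng.uniform(t_min, t_max) * d
        fy = f(y)
        # "Improving" acceptance rule: move only to strictly better points.
        if fy < fx:
            x, fx = y, fy
    return x, fx

# Usage: minimize a simple quadratic over the box [-1, 1]^5.
if __name__ == "__main__":
    dim = 5
    x_best, f_best = improving_hit_and_run(
        lambda x: float(np.sum(x ** 2)),
        lower=-np.ones(dim), upper=np.ones(dim), x0=np.full(dim, 0.9),
    )
    print(x_best, f_best)
```

A simulated annealing variant would differ only in step 3's acceptance rule, occasionally accepting worse candidates with a temperature-dependent probability instead of rejecting them outright.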
