Abstract

We introduce the notion of expected hitting time to a goal as a measure of the convergence rate of a Monte Carlo optimization method. The techniques developed apply to simulated annealing, genetic algorithms, and other stochastic search schemes. The expected hitting time can itself be calculated from the more fundamental complementary hitting time distribution (CHTD), which completely characterizes a Monte Carlo method. The CHTD is asymptotically a geometric series, (1/s)/(1 − λ), characterized by two parameters, s and λ, that are related to the search process in a simple way. The main utility of the CHTD is in comparing Monte Carlo algorithms. In particular, we show that independent, identical Monte Carlo algorithms run in parallel (IIP parallelism) can exhibit superlinear speedup. We give conditions under which this occurs and note that equally likely search is linearly sped up. Further, we observe that a serial Monte Carlo search can have an infinite expected hitting time, while the same algorithm, when parallelized, can have a finite expected hitting time. One consequence of the observed superlinear speedup is an improved uniprocessor algorithm obtained by the technique of in-code parallelism.
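
To make the speedup claim concrete, here is a minimal numerical sketch (not part of the paper's text) under an assumed geometric tail c(n) = (1/s)λ^n for the CHTD; the parameter values s = 2 and λ = 0.99 are hypothetical, chosen only to make the effect visible. For m independent, identical copies run in parallel, the tail is c(n)^m, and the expected hitting time is the sum of the tail over n.

```python
# A minimal sketch (not from the paper) of how IIP parallelism can give
# superlinear speedup.  It assumes an illustrative complementary hitting
# time distribution with asymptotic geometric tail c(n) = (1/s) * lam**n;
# the values s = 2 and lam = 0.99 are hypothetical.

def expected_hitting_time(tail, m=1, tol=1e-12):
    """E[T_m] = sum over n of P(T_m > n); for m IIP copies the tail is tail(n)**m."""
    total, n = 0.0, 0
    while True:
        term = min(1.0, tail(n)) ** m
        total += term
        if term < tol:
            return total
        n += 1

s, lam = 2.0, 0.99                       # hypothetical tail parameters
tail = lambda n: (1.0 / s) * lam ** n    # c(n) ~ (1/s) * lam^n

e1 = expected_hitting_time(tail, m=1)    # single serial search
for m in (2, 4, 8):
    em = expected_hitting_time(tail, m=m)
    print(f"m = {m}: speedup E[T_1]/E[T_m] = {e1 / em:.1f} (linear would be {m})")
```

With these illustrative parameters, two copies already yield a speedup of roughly 4 rather than 2, i.e., better than linear; the effect grows with m.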
