Abstract

We present two new variants of the stochastic ruler method for solving discrete stochastic optimization problems. Both variants move around the state space using the same mechanism as the modified stochastic ruler method we proposed earlier, but they estimate the optimal solution differently. The modified stochastic ruler method estimates the optimal solution by the number of visits each state receives from the Markov chain generated by the algorithm. In contrast, our first new method counts visits by the embedded chain of that Markov chain, and our second new method selects the feasible solution with the best average estimated objective function value. Like our earlier modification of the stochastic ruler method, both new methods are guaranteed to converge almost surely to the set of global optimal solutions. We present theoretical and numerical results indicating that the new approaches tend to reach the set of global optimal solutions faster.
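The three estimators contrasted in the abstract can be sketched as below. This is a minimal illustrative sketch of a stochastic-ruler-style search for minimization, not the paper's exact algorithm: the acceptance rule, the test-count schedule, and all function and parameter names (`h_sample`, `neighbors`, the ruler range `(a, b)`) are assumptions introduced here for illustration.

```python
import random

def stochastic_ruler(h_sample, neighbors, x0, a, b, n_iters=2000, seed=0):
    """Illustrative stochastic-ruler search (minimization).

    h_sample(x) returns one noisy observation of the objective at x;
    neighbors(x) returns the candidate states reachable from x;
    (a, b) bounds the objective values (the range of the 'ruler').
    Returns three estimates of the optimum, one per counting scheme
    mentioned in the abstract.
    """
    rng = random.Random(seed)
    x = x0
    visits = {x: 1}        # visits by the full Markov chain
    embedded = {x: 1}      # visits by the embedded chain (accepted moves only)
    sums, counts = {}, {}  # running sums/counts of observed objective values

    for k in range(n_iters):
        z = rng.choice(neighbors(x))
        m = 1 + k // 500   # number of ruler tests grows slowly (schedule is illustrative)
        accept = True
        for _ in range(m):
            hz = h_sample(z)
            sums[z] = sums.get(z, 0.0) + hz
            counts[z] = counts.get(z, 0) + 1
            if hz > rng.uniform(a, b):  # candidate fails a test against the ruler
                accept = False
                break
        if accept:
            x = z
            embedded[x] = embedded.get(x, 0) + 1
        visits[x] = visits.get(x, 0) + 1

    by_visits = max(visits, key=visits.get)       # modified stochastic ruler estimator
    by_embedded = max(embedded, key=embedded.get) # first new variant
    by_average = min(counts, key=lambda s: sums[s] / counts[s])  # second new variant
    return by_visits, by_embedded, by_average
```

For example, minimizing a noisy `(x - 3)**2` over `{0, ..., 9}` with adjacent-integer neighborhoods, all three estimators typically settle near `x = 3`; the point of the sketch is only that the same sample path supports three different readouts of the optimum.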

