Abstract

This article adapts the Evolutionary Policy Iteration (EPI) algorithm of Chang et al., which was designed for infinite-horizon Markov decision processes (MDPs) with large action spaces, to discrete stochastic optimization problems, yielding an algorithm called Evolutionary Policy Iteration-Monte Carlo (EPI-MC). By introducing a sampling schedule, EPI-MC extends EPI to stochastic combinatorial optimization settings with a finite action space and a noisy cost (value) function. Convergence of EPI-MC to the optimal action is proven, and experimental results are given.
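To make the idea concrete, below is a minimal Python sketch of an EPI-MC-style search. This is an illustration, not the authors' algorithm: the function name epi_mc_sketch, the schedule n_k = 10k, the population size, and the mutation scheme are all assumptions made for the example. It shows the one ingredient the abstract highlights: at generation k, each candidate action's noisy cost is averaged over n_k Monte Carlo samples, with n_k growing so the estimates converge to the true costs.

```python
import random


def epi_mc_sketch(actions, noisy_cost, generations=50,
                  population_size=10, mutation_prob=0.2, seed=0):
    """Illustrative evolutionary search over a finite action space with a
    noisy cost function (a sketch of the EPI-MC idea, not the exact method).

    actions    -- finite list of candidate actions (hashable, e.g. ints)
    noisy_cost -- callable returning one noisy sample of an action's cost
    """
    rng = random.Random(seed)
    population = rng.sample(actions, min(population_size, len(actions)))
    elite = population[0]

    for k in range(1, generations + 1):
        # Sampling schedule: average n_k noisy cost samples per action, with
        # n_k increasing in k so the Monte Carlo estimates of the true cost
        # become progressively more accurate.
        n_k = 10 * k  # hypothetical schedule; any sequence growing without bound

        costs = {a: sum(noisy_cost(a) for _ in range(n_k)) / n_k
                 for a in set(population) | {elite}}

        # Elitism: the elite is re-estimated alongside the population and
        # retained only if nothing currently beats its estimated cost.
        elite = min(costs, key=costs.get)

        # Next generation: keep the elite, mutate the rest toward random
        # actions so every action in the finite space stays reachable.
        ranked = sorted(population, key=costs.get)
        population = [elite] + [
            rng.choice(actions) if rng.random() < mutation_prob else a
            for a in ranked[: population_size - 1]
        ]
    return elite


# Example: minimize a noisy quadratic over the finite action set {-50,...,50}.
actions = list(range(-50, 51))
best = epi_mc_sketch(actions, lambda a: (a - 7) ** 2 + random.gauss(0, 25))
print(best)  # tends toward 7 as the sampling schedule grows
```

The increasing schedule is the point of the modification: early generations explore cheaply under coarse cost estimates, while later generations rank actions by estimates accurate enough for the elite to stabilize at the optimal action, which mirrors the convergence argument the abstract refers to.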
