Abstract

Solving a stochastic optimization problem often involves performing repeated noisy function evaluations at the points encountered by the algorithm. Recently, a continuous optimization framework that executes a single observation per search point was shown to exhibit a martingale property, so the associated estimation errors are guaranteed to converge to zero. We generalize this martingale single-observation approach to problems with mixed discrete–continuous variables, and we establish mild regularity conditions under which this class of algorithms converges to a global optimum.
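The martingale mechanism behind the single-observation approach can be illustrated with a minimal sketch (this is not the paper's algorithm, and `noisy_eval` is a hypothetical objective chosen for illustration): when a search point is revisited, each visit contributes one noisy observation, and the running average's estimation error is an average of martingale differences that converges to zero as visits accumulate.

```python
import random

def noisy_eval(x):
    """One noisy observation of an assumed objective f(x) = x**2
    with zero-mean Gaussian noise (illustrative only)."""
    return x * x + random.gauss(0.0, 1.0)

def running_estimate(x, n):
    """Average n single observations gathered over successive visits to x.
    The incremental-mean update keeps only the current estimate in memory."""
    est = 0.0
    for k in range(1, n + 1):
        est += (noisy_eval(x) - est) / k  # incremental mean update
    return est

random.seed(0)
# The estimation error |estimate - f(x)| shrinks as the visit count grows.
print(abs(running_estimate(2.0, 100) - 4.0))
print(abs(running_estimate(2.0, 100_000) - 4.0))
```

The same averaging applies unchanged whether the visited point has continuous coordinates, discrete coordinates, or both, which is why the single-observation scheme extends naturally to mixed-variable problems.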
