Abstract

Passive algorithms for the global optimization of a function choose observation points independently of past observed values. We study the average performance of two common passive algorithms, where the average is taken with respect to a probability measure on a function space. We consider the case where the measure is concentrated on smooth functions and compare the results to the case of non-differentiable functions. The first algorithm chooses equally spaced observation points, while the second chooses observation points independently and uniformly at random. The average convergence rate is derived for both algorithms.
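As a rough illustration of the two passive schemes described above, here is a minimal sketch (not the paper's analysis, just the sampling strategies) comparing a deterministic grid with i.i.d. uniform sampling when maximizing a test function on [0, 1]; the test function and parameter choices are illustrative assumptions:

```python
import random

def grid_search(f, n, a=0.0, b=1.0):
    """Passive algorithm 1: evaluate f at n equally spaced points in [a, b]."""
    points = [a + (b - a) * i / (n - 1) for i in range(n)]
    return max(f(x) for x in points)

def random_search(f, n, a=0.0, b=1.0, seed=0):
    """Passive algorithm 2: evaluate f at n i.i.d. uniform points in [a, b]."""
    rng = random.Random(seed)
    return max(f(rng.uniform(a, b)) for _ in range(n))

# Hypothetical smooth test function with a unique maximum of 0 at x = 0.3.
f = lambda x: -(x - 0.3) ** 2

best_grid = grid_search(f, 101)
best_rand = random_search(f, 101)
```

Both schemes are passive: the set of observation points is fixed (or drawn) in advance, with no adaptation to previously observed function values.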
