Abstract

Passive algorithms for global optimization of a function choose observation points independently of past observed values. We study the average performance of two common passive algorithms under the assumption of a Brownian motion prior. The first algorithm chooses equally spaced observation points, while the second chooses observation points independently and uniformly at random. The average convergence rate for both is O(n^{-1/2}), with the second algorithm approximately 82% as efficient as the first.
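The comparison described above can be illustrated with a small simulation. The sketch below (an illustration under assumed details, not the paper's analysis) simulates a Brownian motion path on [0,1], observes it at n equally spaced points and at n i.i.d. uniform points, and reports the gap between the true maximum of the simulated path and the best observed value for each strategy. All function and parameter names here are hypothetical.

```python
import random
import math

def brownian_path(m, seed=0):
    """Simulate standard Brownian motion on [0,1] at m+1 grid points."""
    rng = random.Random(seed)
    dt = 1.0 / m
    w = [0.0]
    for _ in range(m):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return w

def observe(w, t):
    """Value of the simulated path at time t in [0,1] (nearest grid point)."""
    return w[round(t * (len(w) - 1))]

def error_grid(w, n):
    """Gap between the path's max and the best of n equally spaced observations."""
    best = max(observe(w, (i + 0.5) / n) for i in range(n))
    return max(w) - best

def error_uniform(w, n, rng):
    """Gap between the path's max and the best of n i.i.d. uniform observations."""
    best = max(observe(w, rng.random()) for _ in range(n))
    return max(w) - best

if __name__ == "__main__":
    rng = random.Random(1)
    trials, n, m = 200, 50, 10000
    g = u = 0.0
    for s in range(trials):
        w = brownian_path(m, seed=s)
        g += error_grid(w, n)
        u += error_uniform(w, n, rng)
    # Both mean errors shrink on the order of n^{-1/2}; the uniform
    # strategy's error is typically somewhat larger at the same n.
    print(f"mean error, equally spaced: {g / trials:.4f}")
    print(f"mean error, uniform random: {u / trials:.4f}")
```

Averaging over many simulated paths, the equally spaced design tends to leave a smaller gap to the maximum at the same sample size, consistent with the relative-efficiency claim in the abstract.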
