Abstract

We propose elimination-based algorithms that work for any \textit{selection-of-the-best} problem with convex structure, guaranteeing, for any prescribed precision level, a near-optimal decision with high probability. The proposed algorithms require no knowledge of the suboptimality gap, which is often hard to acquire a priori in real-world applications. Specifically, we develop an adaptive sampling algorithm whose worst-case sample complexity is proven to depend only logarithmically on the problem scale, improving upon the linear dependence for general selection problems. In addition, we propose a uniform sampling algorithm based on a novel elimination criterion, whose worst-case sample complexity is asymptotically independent of the problem scale and matches the best achievable performance up to a constant. We show that the sample complexity and optimality results continue to hold in the generalized problem of selecting the best $m$ decisions, as well as under instance-dependent analysis. Numerical experiments on both synthetic and real-world examples illustrate our theoretical findings.
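To make the elimination idea concrete, the sketch below shows a generic successive-elimination loop for selecting the decision with the highest mean, using Hoeffding-style confidence bounds. This is an illustrative sketch only, not the algorithm proposed in the paper: the names (`successive_elimination`, `pull`), the confidence radius, and the stopping rule are our own assumptions, and the sketch does not exploit the convex structure or achieve the sample-complexity guarantees stated above. It does share the abstract's key property of requiring no knowledge of the suboptimality gap.

```python
import math
import random


def successive_elimination(arms, pull, delta=0.05, epsilon=0.01, max_rounds=10000):
    """Generic successive elimination for best-mean selection (illustrative sketch).

    `arms` is a list of decision identifiers; `pull(a)` returns a noisy
    reward in [0, 1] for decision `a`.  Each round samples every surviving
    decision once, then eliminates any decision whose upper confidence
    bound falls below the best lower confidence bound.  Stops when one
    decision remains or the confidence radius drops below epsilon / 2,
    so the returned decision is epsilon-optimal with probability >= 1 - delta.
    """
    active = list(arms)
    sums = {a: 0.0 for a in active}
    counts = {a: 0 for a in active}
    for t in range(1, max_rounds + 1):
        # Uniformly sample every surviving decision once this round.
        for a in active:
            sums[a] += pull(a)
            counts[a] += 1
        # Hoeffding-style radius with a union bound over arms and rounds.
        radius = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        means = {a: sums[a] / counts[a] for a in active}
        best_lcb = max(means[a] - radius for a in active)
        # Keep only decisions still plausibly optimal.
        active = [a for a in active if means[a] + radius >= best_lcb]
        if len(active) == 1 or radius <= epsilon / 2:
            break
    return max(active, key=lambda a: sums[a] / counts[a])


# Hypothetical usage: three decisions with true means 0.2, 0.5, 0.9.
random.seed(0)
true_means = [0.2, 0.5, 0.9]
best = successive_elimination(
    range(3), lambda a: true_means[a] + random.uniform(-0.1, 0.1)
)
print(best)
```

Note that the number of rounds here scales with the inverse squared suboptimality gaps, so the worst-case sample complexity of this naive sketch grows with the problem scale; the abstract's point is precisely that convex structure allows eliminating that dependence.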
