Abstract

Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research problem, particularly when no assumptions are made about the function's structure. The main reason is that at each iteration, BO requires the global maximisation of an acquisition function, and this maximisation is itself a non-convex optimisation problem in the original search space. As the dimensionality grows, any fixed computational budget for this maximisation quickly becomes insufficient, leading to inaccurate solutions. This inaccuracy adversely affects both the convergence and the efficiency of BO. We propose a novel approach in which the acquisition function is maximised only over a discrete set of low-dimensional subspaces embedded in the original high-dimensional search space. Unlike many recent high-dimensional BO methods, our method is free of any low-dimensional structure assumption on the function. Optimising the acquisition function in low-dimensional subspaces allows our method to obtain accurate solutions within a limited computational budget. We show that, despite this convenience, our algorithm remains convergent; in particular, its cumulative regret grows only sub-linearly with the number of iterations. More importantly, as is evident from our regret bounds, our algorithm provides a way to trade the convergence rate against the number of subspaces used in the optimisation. Finally, when the number of subspaces is "sufficiently large", our algorithm's cumulative regret is at most O*(√(Tγ_T)), as opposed to O*(√(DTγ_T)) for the GP-UCB of Srinivas et al. (2012), removing a crucial factor of √D, where D is the dimensionality of the input space. We evaluate our method extensively in empirical experiments, showing that its sample efficiency is better than that of existing methods on many optimisation problems with dimensions up to 5,000.
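The central idea, maximising the acquisition function only over a discrete set of low-dimensional subspaces, can be illustrated with a minimal sketch. This is not the authors' implementation: the random linear embeddings, the GP-UCB acquisition, and the parameters `d`, `n_subspaces`, and `n_candidates` are illustrative assumptions, and the paper's actual subspace construction may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def suggest_next_point(gp, D, d=2, n_subspaces=8, n_candidates=500,
                       beta=2.0, seed=None):
    """Maximise a GP-UCB acquisition over a discrete set of random
    d-dimensional linear subspaces of the box [-1, 1]^D, rather than
    over the full D-dimensional space. All parameters here are
    illustrative, not the paper's prescribed choices."""
    rng = np.random.default_rng(seed)
    best_x, best_acq = None, -np.inf
    for _ in range(n_subspaces):
        A = rng.standard_normal((D, d)) / np.sqrt(d)   # random embedding
        Z = rng.uniform(-1.0, 1.0, size=(n_candidates, d))
        X = np.clip(Z @ A.T, -1.0, 1.0)                # lift into [-1, 1]^D
        mu, sigma = gp.predict(X, return_std=True)
        acq = mu + np.sqrt(beta) * sigma               # UCB score
        i = int(np.argmax(acq))
        if acq[i] > best_acq:
            best_acq, best_x = acq[i], X[i]
    return best_x

# Toy usage: fit a GP on a few random observations, then suggest a point.
rng = np.random.default_rng(0)
D = 200
X_obs = rng.uniform(-1.0, 1.0, size=(10, D))
y_obs = -np.linalg.norm(X_obs, axis=1)                 # toy objective
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)
x_next = suggest_next_point(gp, D, seed=1)
```

Because each inner search runs over d ≪ D dimensions, a fixed candidate budget covers every subspace far more densely than it could ever cover the full space; the regret bounds above quantify how the number of subspaces trades off against the convergence rate.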

Highlights

  • Bayesian optimization (BO) offers an efficient solution for finding the global optimum of expensive black-box functions, a problem that is all-pervasive in real-world experimental design applications

  • The main difficulty a BO algorithm faces in high dimensions is that, at each iteration, it needs to find the global maximum of a surrogate function, called the acquisition function, in order to suggest the next function evaluation point

  • We propose a scalable Bayesian optimisation method to optimise expensive black-box functions in high dimensions

Introduction

Bayesian optimization (BO) offers an efficient solution for finding the global optimum of expensive black-box functions, a problem that is all-pervasive in real-world experimental design applications. The main difficulty a BO algorithm faces in high dimensions is that, at each iteration, it needs to find the global maximum of a surrogate function, called the acquisition function, in order to suggest the next function evaluation point. Maximising the acquisition function is itself a non-convex optimisation problem in the original search space. Any fixed computational budget for this optimisation quickly becomes insufficient, leading to inaccurate solutions. This inaccuracy adversely affects both the convergence and the efficiency of the BO algorithm.
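To make the budget issue concrete, the following is a minimal sketch of the conventional full-space approach described above: multi-start local maximisation of the acquisition with a fixed per-iteration budget. The acquisition `acq`, the box bounds, and the budget parameters are assumptions for illustration; with the budget held fixed, the returned solution becomes increasingly inaccurate as D grows.

```python
import numpy as np
from scipy.optimize import minimize

def maximise_acq_full_space(acq, D, n_starts=10, maxiter=50, seed=None):
    """Conventional approach: multi-start L-BFGS-B maximisation of the
    acquisition over the full box [-1, 1]^D under a fixed budget
    (n_starts restarts, maxiter iterations each)."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(-1.0, 1.0, size=D)
        res = minimize(lambda x: -acq(x), x0, method="L-BFGS-B",
                       bounds=[(-1.0, 1.0)] * D,
                       options={"maxiter": maxiter})
        if -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_x

# Toy usage with a synthetic acquisition surface.
x_best = maximise_acq_full_space(lambda x: -np.sum(x**2), D=1000, seed=0)
```

The subspace-restricted sketch after the abstract replaces this single full-dimensional search with several much cheaper low-dimensional ones.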
