Abstract

Bayesian optimization (BO) is a powerful approach to seeking the global optimum of expensive black-box functions and has proven successful for fine-tuning the hyperparameters of machine learning models. In practice, however, BO is limited to optimizing 10–20 parameters. To scale BO to higher dimensions, one usually makes structural assumptions about the decomposition of the objective and/or exploits the intrinsic lower dimensionality of the problem, e.g., by using linear projections. Nonlinear projections can achieve higher compression rates, but learning such nonlinear embeddings typically requires large amounts of data, which conflicts with BO's premise of a small evaluation budget. To address this challenge, we propose to learn a low-dimensional feature space jointly with (a) the response surface and (b) a reconstruction mapping. Our approach allows us to optimize BO's acquisition function in the lower-dimensional subspace, which significantly simplifies the optimization problem. We reconstruct the original parameter space from the lower-dimensional subspace to evaluate the black-box function. To ensure meaningful exploration, we solve a constrained optimization problem.
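
Schematically, the constrained acquisition step can be written as follows. The abstract does not spell out the exact constraint set, so the "reliable reconstruction" region below is an illustrative assumption:

```latex
% Schematic constrained acquisition maximization in the d-dimensional
% feature space; the constraint set Z_valid is an illustrative assumption.
\[
  \mathbf{z}_{t+1} \in \operatorname*{arg\,max}_{\mathbf{z} \in \mathcal{Z}_{\mathrm{valid}} \subset \mathbb{R}^d} \alpha(\mathbf{z}),
  \qquad
  \mathbf{x}_{t+1} = g(\mathbf{z}_{t+1}) \in \mathbb{R}^D,
\]
% where alpha is the acquisition function defined via the feature-space
% response surface, g is the learned reconstruction mapping, and Z_valid
% restricts the search to feature-space regions whose reconstructions are
% trustworthy (e.g., regions of low reconstruction uncertainty).
```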

Highlights

  • Bayesian optimization (BO) is a useful model-based approach to the global optimization of black-box functions that are expensive to evaluate (Kushner 1964; Jones et al. 1998)

  • The standard BO routine consists of two key steps: (1) estimating the black-box function from data with a probabilistic surrogate model, usually a Gaussian process (GP), referred to as the response surface; and (2) maximizing an acquisition function that trades off exploration and exploitation according to the uncertainty and optimality of the response surface (a minimal sketch of this loop appears after this list)

  • We propose a BO algorithm for high-dimensional optimization that jointly learns a nonlinear feature mapping R^D → R^d to reduce the dimensionality of the inputs and a GP-based reconstruction mapping R^d → R^D used to evaluate the true objective function (see Fig. 1)
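
As a concrete reference point for the two-step routine above, here is a minimal sketch of a standard BO loop. It uses scikit-learn's GP regressor with expected improvement; the objective, bounds, budget, and random-candidate acquisition optimizer are illustrative choices, not details from the paper:

```python
# Minimal standard BO loop: (1) fit a GP response surface, (2) maximize an
# acquisition function. All specifics here are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X, gp, y_best):
    """EI acquisition (minimization convention): trades off the GP's
    predicted optimality (mu) against its uncertainty (sigma)."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def bayes_opt(f, bounds, n_init=5, n_iter=20, rng=np.random.default_rng(0)):
    D = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, D))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        # Step 1: GP surrogate of the black-box function (response surface).
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True).fit(X, y)
        # Step 2: maximize the acquisition over random candidates
        # (a cheap stand-in for a proper inner optimizer).
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, D))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

In high dimensions (large D), the inner maximization over candidates in R^D is exactly the step that becomes hard, which motivates moving it into a low-dimensional feature space.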

Summary

Introduction

Bayesian optimization (BO) is a useful model-based approach to the global optimization of black-box functions that are expensive to evaluate (Kushner 1964; Jones et al. 1998). We propose a BO algorithm for high-dimensional optimization that jointly learns a nonlinear feature mapping R^D → R^d to reduce the dimensionality of the inputs and a GP-based reconstruction mapping R^d → R^D used to evaluate the true objective function (see Fig. 1). This allows us to optimize the acquisition function in a lower-dimensional feature space, so that the overall BO routine scales to high-dimensional problems that possess an intrinsic lower dimensionality. We use constrained maximization of the acquisition function in feature space to prevent meaningless reconstructions.
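
To make this concrete, here is a schematic of one iteration of BO in a learned feature space. The feature map below is a stand-in (kernel PCA with scikit-learn's approximate inverse) rather than the jointly trained mapping from the paper, and the box constraint around observed features is only a crude proxy for the paper's constrained acquisition:

```python
# Schematic feature-space BO step: encode to R^d, optimize the acquisition
# there, decode the chosen point back to R^D, and evaluate the objective.
# The encoder/decoder are illustrative stand-ins, not the paper's model.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor

def feature_space_bo_step(f, X, y, d=2, rng=np.random.default_rng(1)):
    # Nonlinear feature map R^D -> R^d with an approximate inverse;
    # the paper instead learns both mappings jointly with the GP.
    kpca = KernelPCA(n_components=d, kernel="rbf",
                     fit_inverse_transform=True).fit(X)
    Z = kpca.transform(X)                           # features in R^d
    # Response surface modeled in the low-dimensional feature space.
    gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)

    # Maximize a simple acquisition (lower confidence bound) over
    # candidates in R^d only. The box around observed features is a
    # crude proxy for constraining to well-reconstructed regions.
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    cand = rng.uniform(lo, hi, size=(2048, d))
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmin(mu - 2.0 * sd)]

    # Reconstruct R^d -> R^D and evaluate the true objective there.
    x_next = kpca.inverse_transform(z_next[None, :])[0]
    return np.vstack([X, x_next]), np.append(y, f(x_next))
```

The key point the sketch illustrates is that the acquisition search runs over d-dimensional candidates, so its cost no longer grows with the original input dimension D; only the single reconstruction and function evaluation touch R^D.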

Bayesian optimization
Bayesian optimization in low‐dimensional feature spaces
Manifold Gaussian processes for response surface learning in feature space
Input reconstruction with manifold multi‐output Gaussian processes
Joint training
Computationally efficient mMOGP
Constrained acquisition
Experiments
Additive objective
Non‐additive objective
Nonlinear feature space with non‐additive objective
Sensitivity analysis on real data
Run‐time complexity
Findings
Conclusion
