Abstract

Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting their applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that our method outperforms existing approaches on noisy, constrained problems. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
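
As a reading aid, the quantity the paper approximates can be written as follows; the notation here is ours and may differ from the paper's. Writing f for the true (noiseless) objective and constraint values at the previously observed points and D for the observed data,

    \alpha_{\mathrm{NEI}}(x)
        = \int \alpha_{\mathrm{EI}}(x \mid f)\, p(f \mid \mathcal{D})\, df
        \approx \frac{1}{N} \sum_{i=1}^{N} \alpha_{\mathrm{EI}}(x \mid \tilde{f}_i),

where p(f | D) is the Gaussian process posterior at the observed points, the f~_i are quasi-Monte Carlo draws from that posterior, and α_EI(x | f~_i) is the closed-form noiseless (constrained) expected improvement conditioned on draw i.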

Highlights

  • Many policies and systems in Internet services, medicine, economics, and other settings have continuous parameters whose effects on outcomes of interest can be measured only via randomized experiments

  • We show that quasi-Monte Carlo (QMC) integration handles the increased dimensionality of the integral and makes noisy expected improvement (NEI) practically useful

  • We show that QMC integration achieves the same integration error and optimization performance with far fewer samples than standard Monte Carlo, allowing us to optimize NEI efficiently (see the sketch after this list)
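
As an illustration of the sample-efficiency claim above, the following is a minimal sketch, not the authors' code, of drawing scrambled Sobol quasi-random samples from a multivariate normal posterior. It assumes only numpy and scipy (1.7 or later for scipy.stats.qmc); the argument names mu and Sigma are our own illustrative choices for the GP posterior mean and covariance at the observed points.

    # Minimal sketch (assumptions noted above): QMC samples from N(mu, Sigma)
    # via a scrambled Sobol sequence mapped through the Gaussian inverse CDF.
    import numpy as np
    from scipy.stats import norm, qmc

    def qmc_posterior_samples(mu, Sigma, n_samples, seed=0):
        d = len(mu)
        sobol = qmc.Sobol(d=d, scramble=True, seed=seed)
        u = sobol.random(n_samples)               # points in (0, 1)^d
        u = np.clip(u, 1e-10, 1 - 1e-10)          # avoid the 0/1 endpoints
        z = norm.ppf(u)                           # standard normal QMC draws
        L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(d))  # jitter for stability
        return mu + z @ L.T                       # shape (n_samples, d)

Replacing the Sobol sequence with numpy's pseudo-random normals gives the plain Monte Carlo baseline; the QMC version typically reaches the same integration error with substantially fewer draws, which is the comparison this highlight refers to.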

Summary

Introduction

Many policies and systems found in Internet services, medicine, economics, and other settings have continuous parameters that affect outcomes of interest which can only be measured via randomized experiments. Extensions of Bayesian optimization to handle noisy observations rely on heuristics that simplify the acquisition function but can perform poorly when noise levels are high. We derive a Bayesian expected improvement under noisy observations and noisy constraints that avoids such heuristics by directly integrating the acquisition function over the posterior at the observed points. We show that this integral can be efficiently optimized via a quasi-Monte Carlo approximation. We have used this method at Facebook to run dozens of optimizations via randomized experiments, and here demonstrate the applicability of Bayesian optimization to A/B testing with two such examples: tuning a ranking system and optimizing server compiler settings.
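
To make the integration step concrete, here is a minimal sketch, in our own notation, of how the QMC approximation to NEI could be assembled; expected_improvement and noisy_ei are illustrative names, not the authors' implementation. Constraints and batch candidates are omitted for brevity; in the full method each EI term is additionally weighted by the probability of feasibility under the sampled constraint values.

    # Minimal sketch (minimization convention, objective only). Each posterior
    # draw of the objective values at the observed points fixes an incumbent;
    # NEI averages closed-form noiseless EI over those draws.
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu_x, sigma_x, best_f):
        # Closed-form noiseless EI at a candidate with posterior mean mu_x and
        # std sigma_x, against incumbent best_f (smaller is better).
        if sigma_x <= 0.0:
            return max(best_f - mu_x, 0.0)
        z = (best_f - mu_x) / sigma_x
        return (best_f - mu_x) * norm.cdf(z) + sigma_x * norm.pdf(z)

    def noisy_ei(cond_mu, cond_sigma, f_draws):
        # cond_mu[i] / cond_sigma[i]: posterior mean/std at the candidate after
        # conditioning the GP on draw i (treating the drawn values at the
        # observed points as noiseless observations); f_draws[i]: the i-th QMC
        # draw, whose minimum serves as the incumbent for that term.
        return np.mean([expected_improvement(m, s, d.min())
                        for m, s, d in zip(cond_mu, cond_sigma, f_draws)])

The draws in f_draws could come from a QMC sampler like the one sketched under Highlights; reconditioning the GP on each draw is delegated to the caller here because it depends on the GP library in use.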

Prior work on expected improvement
Noisy observations
Constraints
Batch optimization
Alternative acquisition functions
Selecting the best point after Bayesian optimization
Utility maximization and EI with noise
Infeasibility in the noiseless setting
Noisy EI
Efficient quasi-Monte Carlo integration of noisy EI
Synthetic problems
Evaluating QMC performance
Optimization performance compared to heuristics and other methods
Bayesian optimization with real-world randomized experiments
Optimizing machine learning systems
Optimizing server performance
Findings
Discussion