Abstract

This work experimentally investigates model-based approaches for optimizing the performance of parameterized randomized algorithms. Such approaches build a response surface model and use this model to find good parameter settings of the given algorithm. We evaluated two methods from the literature that are based on Gaussian process models: sequential parameter optimization (SPO) (Bartz-Beielstein et al. 2005) and sequential Kriging optimization (SKO) (Huang et al. 2006). SPO performed better "out of the box," whereas SKO was competitive when response values were log-transformed. We then investigated key design decisions within the SPO paradigm, characterizing the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and a log-transformed objective function. In a domain for which performance results for other (model-free) parameter optimization approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance. Finally, we compare this automated parameter tuning approach to an interactive, manual process that makes use of classical experimental design.

Keywords: Steep Descent, Sphere Function, Interactive Approach, Behnken Design, Gaussian Process Regression (these keywords were added by machine and not by the authors; the process is experimental and the keywords may be updated as the learning algorithm improves).
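The common loop underlying SPO and SKO is sequential model-based optimization: evaluate an initial design of parameter settings, fit a Gaussian process response-surface model (here to log-transformed responses, as in SPO+), use the model to select the next setting to evaluate (e.g., by expected improvement), and repeat. The following is a minimal Python sketch of that generic loop, assuming scikit-learn and SciPy are available; the toy run_algorithm function, the single tuned parameter, and all numeric choices are hypothetical stand-ins, and the sketch omits the intensification mechanism and other details of the actual SPO/SPO+/SKO procedures.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    # Hypothetical target: noisy "runtime" of a randomized algorithm as a
    # function of one parameter x in [0, 10] (a stand-in for a real solver).
    def run_algorithm(x, rng):
        return 2.0 + (x - 6.3) ** 2 + rng.exponential(scale=0.5)

    rng = np.random.default_rng(0)
    bounds = (0.0, 10.0)

    # Initial design: a handful of evenly spaced parameter settings.
    X = np.linspace(*bounds, 5).reshape(-1, 1)
    y = np.array([run_algorithm(x[0], rng) for x in X])

    for _ in range(20):
        # Fit a GP response-surface model to log-transformed responses;
        # alpha adds a small noise term since the responses are stochastic.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      alpha=1e-2, normalize_y=True)
        gp.fit(X, np.log(y))

        # Expected improvement over the incumbent, on a grid of candidates.
        cand = np.linspace(*bounds, 500).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        best = np.log(y).min()
        imp = best - mu
        z = imp / np.maximum(sigma, 1e-9)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)

        # Evaluate the most promising setting and add it to the data.
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, run_algorithm(x_next[0], rng))

    best_idx = int(np.argmin(y))
    print(f"best parameter: {X[best_idx, 0]:.3f} (response {y[best_idx]:.3f})")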


