Abstract

Welcome to the Black-Box Optimization Benchmarking (BBOB) 2012 workshop! This workshop is a follow-up to the BBOB 2009 workshop in Montreal and the BBOB 2010 workshop in Portland. The workshop focuses on benchmarking continuous optimizers, a crucial task for assessing the performance of optimizers quantitatively, for understanding the weaknesses and strengths of each algorithm, and a compulsory step in evaluating new algorithm designs. Because this task is tedious and difficult to realize, we provide the participants with:

- the choice and implementation of two well-motivated single-objective benchmark function testbeds for noiseless and noisy optimization,
- the design of an experimental set-up,
- the generation of data output, and
- the post-processing and presentation of the results in graphs and tables.

Over the years, the BBOB workshop has established a standard for benchmarking continuous optimizers. The code needed to run the experiments and to post-process the data, and the data collected during the previous editions of the workshop, are available on the website http://coco.gforge.inria.fr/doku.php?id=start. We would like to thank all the participants for their contributions and are happy to announce that we have accepted 19 papers. For further news about the workshop, please check our webpage and subscribe to the mailing lists!
