Abstract

Parameter tuning, that is, finding appropriate parameter settings (or configurations) of algorithms so that their performance is optimized, is an important task in the development and application of metaheuristics. Automating this task, i.e., developing algorithmic procedures to address the parameter tuning task, is highly desirable and has attracted significant attention from researchers and practitioners. During the last two decades, many automatic parameter tuning approaches have been proposed. This paper presents a comprehensive survey of automatic parameter tuning methods for metaheuristics. A new classification (or taxonomy) of automatic parameter tuning methods is introduced according to the structure of the tuning methods. The existing approaches are accordingly classified into three categories: 1) simple generate-evaluate methods; 2) iterative generate-evaluate methods; and 3) high-level generate-evaluate methods. These three categories of tuning methods are then reviewed in sequence. In addition to describing each tuning method, its main strengths and weaknesses are discussed, which helps new researchers and practitioners select appropriate tuning methods. Furthermore, some challenges and directions for further research in this field are pointed out.

Highlights

  • Optimization methods are extensively required and applied to solve problems from almost all disciplines. (Manuscript received September 4, 2018; revised March 29, 2019; accepted June 2, 2019.)

  • F-Race, which is inspired by the Hoeffding race algorithm [49], [50] from machine learning for model selection, was proposed in [45] and comprehensively studied in [31].

  • HORA enters an iterative procedure consisting of: 1) dynamically creating new candidates in the neighborhood of the best known candidate configurations, i.e., configurations that are preferred in evaluation; and 2) evaluating the set of candidate configurations with a racing method to discard poor ones according to the statistical evidence [85], [86].
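The racing idea behind both highlights above can be sketched in a few lines: surviving configurations are evaluated instance by instance, and clearly inferior ones are discarded early so that the evaluation budget concentrates on promising candidates. The sketch below is a deliberate simplification under stated assumptions: it replaces the Friedman statistical test used by F-Race with a plain mean-plus-margin rule, and the function names (`race`, `evaluate`) and parameters (`min_instances`, `margin`) are illustrative, not from the surveyed papers.

```python
import random

def race(configs, evaluate, instances, min_instances=5, margin=0.5):
    """Racing-style elimination (simplified stand-in for F-Race's
    Friedman test): evaluate surviving configurations one problem
    instance at a time, and after a minimum number of instances drop
    any configuration whose mean cost exceeds the best survivor's
    mean by more than `margin`."""
    costs = {c: [] for c in configs}
    survivors = list(configs)
    for i, inst in enumerate(instances):
        for c in survivors:
            costs[c].append(evaluate(c, inst))
        if i + 1 >= min_instances:
            means = {c: sum(costs[c]) / len(costs[c]) for c in survivors}
            best = min(means.values())
            survivors = [c for c in survivors if means[c] <= best + margin]
        if len(survivors) == 1:
            break  # a single winner remains; stop spending the budget
    return survivors

# Toy usage: configurations are scalar parameter values, and the noisy
# cost function (an assumption for illustration) is minimized at 1.0.
random.seed(0)
configs = [0.0, 0.5, 1.0, 2.0, 4.0]
noisy_cost = lambda c, inst: (c - 1.0) ** 2 + random.gauss(0, 0.1)
print(race(configs, noisy_cost, range(20)))
```

Real tuners differ mainly in how candidates enter the race: the simple generate-evaluate methods fix the candidate set up front, while an iterative scheme such as HORA would repeatedly call a routine like this and refill `configs` with neighbors of the survivors.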

Summary

A Survey of Automatic Parameter Tuning Methods for Metaheuristics

Abstract—Parameter tuning, that is, finding appropriate parameter settings (or configurations) of algorithms so that their performance is optimized, is an important task in the development and application of metaheuristics. Automating this task, i.e., developing algorithmic procedures to address the parameter tuning task, is highly desirable and has attracted significant attention from researchers and practitioners. The existing automatic parameter tuning approaches are classified into three categories: 1) simple generate-evaluate methods; 2) iterative generate-evaluate methods; and 3) high-level generate-evaluate methods. These three categories of tuning methods are reviewed in sequence.

INTRODUCTION
AUTOMATIC PARAMETER TUNING
Statement of Parameter Tuning Problem
Classification of Tuning Methods
Brute-Force Approach
SIMPLE GENERATE-EVALUATE METHODS
F-Race
Remarks on Simple Generate-Evaluate Methods
ITERATIVE GENERATE-EVALUATE METHODS
Experimental Design-Based Tuning
Numerical Optimization-Based Tuning
Heuristic Search-Based Methods
Model-Based Optimization Approaches
Remarks on Iterative Generate-Evaluate Methods
HIGH-LEVEL GENERATE-EVALUATE METHODS
Post-Selection Mechanism
Remarks on High-Level Generate-Evaluate Methods
Future Research Prospects
CONCLUSION