Abstract
Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration SWO performs a complete construction of a solution, starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding means that SWO is generally effective at diversification but can suffer from relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve the intensification by keeping the good components of solutions and only using SWO to reconstruct the other, poorer components of the solution. In such algorithms a standard challenge is to understand how the various parameters affect the search process. In order to support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used in local search (such as simulated annealing), which only move through complete assignments. Generally, the exact details of ESWO will depend on various heuristics, so we focus our approach on a case of ESWO that we call ESWO-II and that has probabilistic, as opposed to heuristic, selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary distribution probability over the states of the search space. We find interesting properties of the distribution. In particular, we find that the probabilities of states generally, but not always, increase with their fitness. This nonmonotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.
Highlights
According to Papadimitriou and Steiglitz (1982), a combinatorial problem can be expressed as a model (S, f), where S denotes a search space over a finite set of …

© 2011 by the Massachusetts Institute of Technology. Evolutionary Computation 19(3): 405–428.
In this paper, we study a particular version of Evolutionary SWO (ESWO), which we call ESWO-II. It is set up to capture the intent of ESWO, but to do so using stochastic methods that are amenable to Markov chain analysis.
We have developed a formal framework for extending ESWO to ESWO-II by revising ESWO's construction step to enable probabilistic choices among different possible destination states in a flexible way.
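One simple way to realize such a probabilistic construction step is fitness-proportional selection among candidate destination states. The sketch below is our own illustration of this idea, not the paper's exact operator; the function name `choose_destination` and the roulette-wheel weighting are assumptions made for the example.

```python
import random

def choose_destination(candidates, fitness, rng=None):
    """Pick one candidate destination state with probability
    proportional to its fitness value (roulette-wheel selection)."""
    rng = rng or random.Random()
    weights = [fitness(c) for c in candidates]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point rounding
```

With such an operator, any state reachable by the construction step has a nonzero transition probability, which is what makes the resulting process a well-defined Markov chain over partial assignments.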
Summary
The lack of memory of the previous priority queue (PQ) means that it is more straightforward to build a Markov chain model over the search space, without having to include the PQ in the state, as would (presumably) be necessary to model SWO. At this point, we explain our motivations for studying the properties of the algorithm. Our general motivation for carrying out this study is that numerically solving the Markov chain on smaller instances allows much faster and more precise calculation than simulation would give. This enables a more effective study aimed at understanding how the various components of the algorithm interact with key measures such as the stationary distribution.
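The numerical approach referred to above can be sketched concretely: given the transition matrix of a small chain, the stationary distribution can be computed directly by power iteration rather than estimated by simulation. The 3-state transition matrix below is purely illustrative, not an instance from the paper.

```python
def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Left power iteration: repeat pi <- pi P until the change
    falls below tol. P is a row-stochastic matrix (rows sum to 1)."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(new[j] - pi[j]) for j in range(n)) < tol:
            return new
        pi = new
    return pi

# Hypothetical 3-state chain, for illustration only.
P = [[0.5, 0.4, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

pi = stationary_distribution(P)
```

For an ergodic chain this converges to the unique stationary distribution, and the result is exact up to the tolerance, which is the precision advantage over Monte Carlo simulation noted above.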