Compact optimization is an alternative paradigm in the field of metaheuristics that requires only a modest amount of memory to optimize a problem. As opposed to population-based algorithms, which conduct the search by employing a set of candidate solutions, compact algorithms use a probabilistic model to describe how solutions are distributed over the search space. Compared to other Estimation of Distribution Algorithms, peculiar features such as the use of simple probabilistic models, in which variables are treated independently, and the need to sample only a minimal number of solutions to perform the search make these algorithms suitable for applications plagued by memory limitations. Compact algorithms show good results on various kinds of optimization problems, but they often converge prematurely and perform poorly on non-separable problems. In this paper, we attempt to overcome these limitations by combining compact algorithms with a restart mechanism named Re-Sampled Inheritance (RI), whose purpose is to avoid premature convergence while inheriting part of the variables from the best solution found so far. To assess the effect of the RI mechanism, we extensively test various existing compact algorithms, with and without RI, and compare the best RI-based compact algorithm against several competing algorithms on a range of optimization problems at different dimensionalities. We also evaluate the effect of the RI parameters on the overall algorithmic performance. Our numerical results not only show that RI consistently enhances the performance of compact algorithms, but also shed some light on the effectiveness of different compact logics at handling problems of different dimensionalities.
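To make the ideas summarized above concrete, the following minimal sketch shows a binary compact Genetic Algorithm (a single probability vector, updated from tournaments between just two sampled solutions, stands in for an entire population) combined with an RI-style restart that re-initializes the model while biasing a fraction of the variables toward the best solution found so far. All names, parameter values, and the toy OneMax objective are illustrative assumptions, not the exact scheme evaluated in the paper.

```python
import random

def onemax(x):
    # Toy separable objective: count of ones (assumed for illustration only).
    return sum(x)

def sample(pv):
    # Draw one binary solution from the independent-variable probability model.
    return [1 if random.random() < p else 0 for p in pv]

def compact_ga_with_ri(n=20, virtual_pop=50, budget=5000,
                       inherit_rate=0.5, restart_every=500):
    """Sketch of a binary cGA with a Re-Sampled-Inheritance-style restart.

    The probability vector pv replaces the population: at any time only
    two candidate solutions are held in memory.
    """
    pv = [0.5] * n                      # one probability per variable
    best = sample(pv)
    best_f = onemax(best)
    for t in range(1, budget + 1):
        a, b = sample(pv), sample(pv)   # only two solutions sampled per step
        winner, loser = (a, b) if onemax(a) >= onemax(b) else (b, a)
        for i in range(n):              # shift the model toward the winner
            if winner[i] != loser[i]:
                step = 1.0 / virtual_pop
                pv[i] += step if winner[i] == 1 else -step
                pv[i] = min(max(pv[i], 1.0 / n), 1.0 - 1.0 / n)
        f = onemax(winner)
        if f > best_f:
            best, best_f = winner[:], f
        if t % restart_every == 0:
            # RI-style restart: re-sample the model from scratch, but
            # inherit (bias toward) part of the best-so-far solution.
            pv = [0.5] * n
            for i in range(n):
                if random.random() < inherit_rate:
                    pv[i] = 0.9 if best[i] == 1 else 0.1
    return best, best_f
```

The restart keeps memory usage constant: instead of copying variables into a stored population, the inherited coordinates are encoded directly into the re-initialized probability vector.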