Abstract

In recent years, numerous efficient many-objective evolutionary algorithms have been proposed to find well-converged and well-distributed nondominated solutions. However, their performance may deteriorate drastically on large-scale many-objective optimization problems (LSMaOPs). When confronted with a high-dimensional decision space of more than 100 decision variables, some of them lose diversity and become trapped in local optima, while others converge poorly. This article proposes a multipopulation-based differential evolution algorithm, called LSMaODE, which solves LSMaOPs efficiently and effectively. To explore and exploit the exponentially large decision space, the proposed algorithm divides the population into two groups of subpopulations, which are optimized with different strategies. First, the randomized coordinate descent technique is applied to 10% of the individuals to exploit the decision variables independently; this subpopulation maintains diversity in the decision space and guards against premature convergence to a local optimum. Second, the remaining 90% of the individuals are optimized with a nondominated guided random interpolation strategy, which interpolates each individual among three randomly selected nondominated solutions. This strategy guides the population to converge quickly toward the nondominated solutions while maintaining a good distribution in the objective space. Finally, the proposed LSMaODE is evaluated on the LSMOP test suite for scalability in both the decision and objective dimensions, and its performance is compared against five state-of-the-art large-scale many-objective evolutionary algorithms. The experimental results show that LSMaODE delivers highly competitive performance.
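The interpolation strategy described above can be illustrated with a minimal sketch. The abstract does not give the exact operator, so the version below makes an assumption: the new individual is a random convex combination of three nondominated solutions drawn from the current archive. The function name and the use of barycentric weights are illustrative, not taken from the paper.

```python
import random

def guided_random_interpolation(nondominated, dim):
    """Sketch of a nondominated guided random interpolation step.

    Assumption (not specified in the abstract): the offspring is a
    random convex combination of three distinct nondominated solutions,
    which keeps it inside the simplex they span in decision space.
    """
    # Pick three distinct nondominated solutions at random.
    x1, x2, x3 = random.sample(nondominated, 3)
    # Draw random weights and normalize them to sum to 1.
    w1, w2, w3 = (random.random() for _ in range(3))
    total = w1 + w2 + w3
    w1, w2, w3 = w1 / total, w2 / total, w3 / total
    # Interpolate coordinate-wise among the three solutions.
    return [w1 * x1[i] + w2 * x2[i] + w3 * x3[i] for i in range(dim)]
```

Because the weights are nonnegative and sum to one, the offspring stays within the region spanned by the three parents, which is one plausible way such a strategy could pull the population toward the nondominated set while preserving spread.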
