Abstract

Dynamic constrained optimization problems pose a challenge for optimization algorithms: they must locate the global optimum while also coping with changes of the environment. Recently, a novel test suite for dynamic constrained optimization was introduced, and three well-performing evolutionary algorithms were compared on it. The experimental results show that each algorithm performed best on a different type of optimization problem. The objective of our work was to develop an algorithm that reflects the requirements arising from the novel test suite and the results provided by the tested algorithms. In this work, we present a novel evolutionary algorithm for dynamic constrained optimization. The algorithm hybridizes the self-organizing migrating algorithm and the covariance matrix adaptation evolution strategy with a constraint-handling approach. To avoid premature convergence, the best solutions representing feasible regions do not affect the rest of the population. Two clustering methods, an exclusion radius, and quantum particles are used to preserve population diversity. The performance is evaluated on the recently published test suite and compared to three state-of-the-art algorithms. The presented algorithm outperformed these algorithms in most test cases, which indicates the efficiency of the utilized mechanisms.

Highlights

  • Dynamic constrained optimization problems (DCOPs) are problems where the objective function, constraints, or both change over time

  • Despite the relatively large number of EAs developed for solving COPs, the number of EAs focused on the DCOPs remains relatively small

  • The global extremes are situated in the center of the feasible region

Introduction

Dynamic constrained optimization problems (DCOPs) are problems where the objective function, the constraints, or both change over time. These problems include dynamic scheduling [1], dynamic obstacle avoidance [2], etc. A DCOP can be stated as

  minimize f(x⃗, t), x⃗ ∈ ℝ^D,
  subject to g_j(x⃗, t) ≤ 0, j = 1, …, l,
             h_j(x⃗, t) = 0, j = l + 1, …, m,

where D denotes the dimension of the problem, t is the discrete time instance (the environmental variable), f(x⃗, t) is the objective function, g_j(x⃗, t) is the jth inequality constraint, and h_j(x⃗, t) is the jth equality constraint. l and (m − l) denote the number of inequality and equality constraints, respectively [3]. Evolutionary algorithms (EAs) can solve various constrained optimization problems (COPs) from the engineering and economic domains [4,5]. Despite the relatively large number of EAs developed for solving COPs, the number of EAs focused on DCOPs remains relatively small.
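To make the formulation concrete, the following is a minimal sketch of a toy DCOP in Python. The objective and constraint functions here (a shifting sphere objective and a single moving inequality constraint) are illustrative assumptions, not problems from the paper's test suite; they only demonstrate how f(x⃗, t) and g_j(x⃗, t) depend on the environment variable t.

```python
import math

def objective(x, t):
    # Sphere function whose optimum drifts with the environment variable t.
    shift = math.sin(0.5 * t)
    return sum((xi - shift) ** 2 for xi in x)

def g1(x, t):
    # Inequality constraint g1(x, t) <= 0 whose boundary moves with t.
    return sum(x) - (1.0 + 0.1 * t)

def is_feasible(x, t):
    # A point is feasible when every inequality constraint holds;
    # equality constraints, if present, would be checked within a tolerance.
    return g1(x, t) <= 0.0

x = [0.2, 0.3]
print(is_feasible(x, t=0))               # True: 0.5 - 1.0 <= 0
print(round(objective(x, t=0), 2))       # 0.13
print(is_feasible([1.0, 1.0], t=0))      # False: 2.0 - 1.0 > 0
```

An algorithm solving such a problem must re-evaluate both feasibility and objective values whenever t advances, since a previously feasible optimum may become infeasible in the next environment.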
