Abstract

Global optimization is a central research topic in various engineering applications, and differential evolution (DE) is one of the most popular approaches. However, DE inevitably becomes trapped in local optima when dealing with complex optimization problems. Dynamic opposite learning (DOL), a recent variant of opposition-based learning (OBL), has the potential to enhance DE thanks to the strong exploration capability contributed by its asymmetric and dynamic search space. To balance exploration and exploitation, an adjustable weight parameter of the search space is usually adopted, yet tuning its value requires extensive testing and expert experience. Instead of relying on this additional weight parameter, a mutual learning (ML) strategy, which leads individuals to learn from each other deterministically, is combined with DOL to strengthen exploitation. The trade-off between exploration and exploitation is guaranteed by randomly switching between DOL and ML during population initialization and generation jumping. The resulting hybrid strategy, named oppositional-mutual learning (OML), is applied to improve the performance of DE. Benchmarks from CEC 2014, including unimodal, multi-modal, hybrid, and composition functions, were adopted to evaluate the performance of the oppositional-mutual learning DE (OMLDE). Numerical comparisons with state-of-the-art counterparts show that OMLDE converges to the global optimum on most functions with significant advantages, which also validates the superiority of the novel OML strategy.
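For concreteness, below is a minimal Python sketch of the OML jumping pass described above. The DOL candidate follows the standard dynamic-opposite formula from the OBL literature, x_do = x + r1 * (r2 * (a + b - x) - x), with the extra weight parameter dropped as the abstract describes. The abstract does not specify the exact mutual-learning update, the switching probability, or a jumping rate, so ml_point, the 50/50 coin, and jump_rate here are illustrative assumptions rather than the authors' exact method; all function names are hypothetical.

import numpy as np

def dol_point(x, low, high, rng):
    """Dynamic-opposite candidate: x + r1 * (r2 * x_opp - x), where
    x_opp = low + high - x is the classic OBL opposite point and
    r1, r2 are uniform random vectors (no extra weight parameter)."""
    x_opp = low + high - x
    cand = x + rng.random(x.size) * (rng.random(x.size) * x_opp - x)
    return np.clip(cand, low, high)

def ml_point(x, partner):
    """Assumed mutual-learning step: deterministic move toward a
    partner solution (the abstract does not give the exact rule)."""
    return x + 0.5 * (partner - x)

def oml_jump(pop, f, low, high, jump_rate, rng):
    """One OML pass over `pop`, usable for both population
    initialization and generation jumping. Each individual jumps
    with probability `jump_rate`; the DOL/ML branch is chosen by a
    fair coin, and the better of original and candidate is kept."""
    for i in range(len(pop)):
        if rng.random() >= jump_rate:
            continue
        if rng.random() < 0.5:                      # exploration: DOL
            cand = dol_point(pop[i], low, high, rng)
        else:                                       # exploitation: ML
            j = (i + rng.integers(1, len(pop))) % len(pop)
            cand = ml_point(pop[i], pop[j])
        if f(cand) < f(pop[i]):                     # greedy selection
            pop[i] = cand
    return pop

# Usage on a toy sphere function (minimization):
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
sphere = lambda x: float(np.sum(x * x))
pop = oml_jump(pop, sphere, -5.0, 5.0, jump_rate=0.3, rng=rng)

The random DOL/ML switch is what replaces the tuned weight parameter: DOL supplies the asymmetric, dynamic exploration, while the deterministic ML step pulls individuals toward each other for exploitation.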
