Abstract

The Differential Evolution (DE) algorithm is a well-known nature-inspired method in the field of evolutionary computation. This paper adds new features to DE and proposes a novel method built around a ranking technique. The proposed method, named Dominance-Based Differential Evolution (DBDE), is an improved version of the standard DE algorithm. DBDE modifies the selection operator of DE and adjusts the crossover and initialization phases to improve performance. Dominance ranks are used in the selection phase of DBDE so that higher-quality solutions can be selected; the dominance rank of a solution X is the number of solutions dominating X. In addition, vectors called target vectors are used throughout the selection process. The effectiveness and performance of the proposed DBDE method are experimentally evaluated on six well-known benchmarks provided by CEC2009, plus two additional test problems, namely Kursawe and Fonseca & Fleming. The evaluation focuses on specific bi-objective real-valued optimization problems reported in the literature. Likewise, the Inverted Generational Distance (IGD) metric is calculated for the obtained results to measure the performance of the algorithms. To follow the evaluation rules obeyed by the state-of-the-art methods, the fitness evaluation function is called 300,000 times and 30 independent runs of DBDE are carried out. Analysis of the obtained results indicates that the proposed algorithm (DBDE) outperforms the majority of state-of-the-art methods reported in the literature in terms of convergence and robustness.
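To make the dominance-rank idea concrete, the following Python sketch computes dominance ranks for a bi-objective minimization problem and uses them to choose between a trial vector and its target vector. It is an illustration only, not the authors' implementation: the helper names (dominates, dominance_rank, select) and the exact selection rule are assumptions made for this example.

    # Illustrative sketch, not the authors' implementation: dominance ranks for a
    # bi-objective minimization problem and a plausible rank-based selection rule.

    def dominates(a, b):
        """True if objective vector `a` Pareto-dominates `b` (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def dominance_rank(f, population_objectives):
        """Number of solutions in the population that dominate objective vector `f`."""
        return sum(1 for g in population_objectives if dominates(g, f))

    def select(target_f, trial_f, population_objectives):
        """Keep the trial vector if its dominance rank is not worse than the target's."""
        if dominance_rank(trial_f, population_objectives) <= dominance_rank(target_f, population_objectives):
            return "trial"
        return "target"

    # Example: a small population evaluated on two objectives (both minimized).
    objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 5.0), (2.5, 3.0)]
    print([dominance_rank(f, objs) for f in objs])  # -> [0, 0, 3, 1]
    print(select(objs[2], objs[1], objs))           # -> trial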

Highlights

  • Nature-inspired multi-objective optimization (MOO) algorithms have been extensively used to solve complex MOO problems

  • In order to compare the performance of the multi-objective algorithms, the evaluation is based on predefined criteria: a fixed budget of function evaluations and the resulting IGD values

  • Algorithms are compared using the mean Inverted Generational Distance (IGD) values over 30 independent runs, which measure the quality of the non-dominated solutions found after exactly 300,000 function evaluations
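A minimal sketch of this evaluation protocol in Python is given below; run_dbde (one independent run capped at a given number of fitness evaluations, returning the obtained front) and igd (the IGD metric) are placeholder names assumed here, not functions from the paper.

    # Sketch of the comparison protocol: 30 independent runs, 300,000 fitness
    # evaluations each, compared by mean IGD. `run_dbde` and `igd` are placeholders.
    import statistics

    def evaluate(run_dbde, igd, reference_front, runs=30, max_evaluations=300_000):
        """Run the algorithm `runs` times and return the mean IGD of the obtained fronts."""
        values = [igd(reference_front, run_dbde(max_evaluations)) for _ in range(runs)]
        return statistics.mean(values)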


Introduction

Nature-inspired multi-objective optimization (MOO) algorithms have been extensively used to solve complex MOO problems. The simultaneous optimization of a collection of objective functions is called MOO or vector optimization [1]. MOO is widely studied across many areas of science and engineering, and MOO problems arise in most disciplines; solving them has been a significant challenge for researchers [2]. Many algorithms have been developed for solving single- and multi-objective optimization problems during recent decades. Multi-objective algorithms must be both sophisticated and robust to cope with NP-hard problems. To solve a multi-objective optimization problem efficiently, an appropriate and convenient algorithm must be selected for the given problem, and recently proposed algorithms should be taken into account. To measure the quality of the obtained outputs and the extracted Pareto front, IGD values are calculated.
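For reference, the sketch below follows the standard definition of the IGD metric: the average Euclidean distance from each point of a reference Pareto front to the nearest point of the obtained front (lower is better). It is written from that general definition rather than taken from the paper, and the example fronts are invented for illustration.

    # Standard-definition sketch of Inverted Generational Distance (IGD); lower is better.
    import math

    def igd(reference_front, obtained_front):
        """Mean Euclidean distance from each reference point to its closest obtained point."""
        total = sum(min(math.dist(r, s) for s in obtained_front) for r in reference_front)
        return total / len(reference_front)

    # Example: a small bi-objective reference front and an approximation of it.
    reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
    approximation = [(0.05, 0.95), (0.6, 0.45), (0.9, 0.1)]
    print(round(igd(reference, approximation), 4))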

