Abstract

Multi-objective optimisation algorithms have grown considerably in recent years, which creates a need for ways to compare their results. In this sense, performance measures play a key role. These measures generally assess properties of the algorithms such as capacity, convergence, diversity, or convergence-diversity. Well-known measures include generational distance (GD), inverted generational distance (IGD), hypervolume (HV), Spread ($\Delta$), averaged Hausdorff distance ($\Delta_p$), and the R2 indicator, among others. In this paper, we propose a new indicator for measuring convergence based on the traditional formula for Shannon entropy. The main features of this measure are: 1) it does not require knowledge of the true Pareto set, and 2) it has moderate computational cost compared with hypervolume.
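The paper's construction of the indicator is not reproduced on this page, but the classical Shannon entropy formula it builds on, $H(p) = -\sum_i p_i \log p_i$, can be sketched directly. The function name below is mine, not the paper's:

```python
import numpy as np

def shannon_entropy(p):
    """Classical Shannon entropy H(p) = -sum_i p_i * log(p_i).

    `p` is a discrete probability distribution (non-negative, sums to 1).
    Zero entries are skipped, following the convention 0 * log(0) = 0.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```

For a uniform distribution over n outcomes this gives log(n), the maximum possible value; a degenerate distribution gives 0.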

Highlights

  • Nowadays, evolutionary algorithms (EAs) are used to obtain approximate solutions of multi-objective optimisation problems (MOPs); such EAs are called multi-objective evolutionary algorithms (MOEAs)

  • Some of these algorithms are very well known in the community, such as NSGA-II (see [1]), SPEA-II (see [2]), MO-PSO (see [3]) and MO-CMA-ES (see [4])

  • To avoid the phenomenon caused by the Pareto dominance relation, some researchers propose other ways of comparing solutions (see [7]) or switch to non-Pareto-based MOEAs, such as indicator-based and aggregation-based approaches (see [8, 9])


Summary

INTRODUCTION

Evolutionary algorithms (EAs) are used to obtain approximate solutions of multi-objective optimisation problems (MOPs); such EAs are called multi-objective evolutionary algorithms (MOEAs). Another measure built on GD and IGD is the so-called Hausdorff measure (see [12]), which combines the two and takes their maximum. This indicator is efficient for obtaining information about the closeness of an algorithm's output to the true Pareto set. A classical work (see [16]) established a relationship between the points of the Pareto set (PS) and gradient information from the problem (2.1). That connection is known as the Karush-Kuhn-Tucker (KKT) conditions for Pareto optimality, stated as Theorem 1 (KKT condition [16]). This theorem will be fundamental to this paper, because we use it to formulate our proposal.
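The relationship between GD, IGD, and the Hausdorff measure described above can be sketched as follows. This is a minimal illustration of the p = 1 case, where GD and IGD are the average nearest-neighbour distances between the approximation set and the reference set; the function names and array layout (one objective vector per row) are my own conventions:

```python
import numpy as np

def gd(A, P):
    """Generational distance: average Euclidean distance from each point
    of the approximation set A to its nearest point in the reference set P."""
    # Pairwise distances between rows of A and rows of P, shape (|A|, |P|)
    dists = np.linalg.norm(A[:, None, :] - P[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def igd(A, P):
    """Inverted generational distance: GD with the roles of A and P swapped."""
    return gd(P, A)

def hausdorff_measure(A, P):
    """Hausdorff-style measure (see [12]): the maximum of GD and IGD."""
    return max(gd(A, P), igd(A, P))
```

Taking the maximum penalises an approximation set that is close to only part of the reference set: its GD may be small, but the uncovered reference points inflate IGD, and the maximum reports the worse of the two.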

SOME CONVERGENCE INDICATORS
PROPOSAL MEASURE H
CONCLUSIONS
