Abstract
Measuring the performance of an algorithm for solving a multiobjective optimization problem has always been challenging, simply due to two conflicting goals: convergence and diversity of the obtained tradeoff solutions. There are a number of metrics for evaluating the performance of a multiobjective optimizer that approximates the whole Pareto-optimal front. However, for evaluating the quality of a preferred subset of the whole front, the existing metrics are inadequate. In this paper, we suggest a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multiobjective optimization algorithm using reference points. The basic idea is to preprocess the preferred solution set according to a multicriterion decision making approach before using a regular metric for performance assessment. Extensive experiments on several artificial scenarios and benchmark problems fully demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.
Highlights
Most real-world problem-solving tasks involve multiple incommensurable and conflicting objectives that need to be considered simultaneously.
Many efforts have been devoted to developing evolutionary multiobjective optimization (EMO) algorithms, such as the elitist nondominated sorting genetic algorithm (NSGA-II) [1]–[3], indicator-based evolutionary algorithms (EAs) [4]–[6], and the multiobjective EA based on decomposition (MOEA/D) [7]–[9].
We find that our proposed R-metrics are reliable metrics for evaluating the performance of a preference-based EMO algorithm. More interestingly, the variation of a certain R-metric (e.g., R-IGD (inverted generational distance) with a time window of ten generations and a standard-deviation threshold τ = 0.1) can be used as a stopping criterion when searching for a preferred solution set.
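The stopping criterion described above can be sketched as a simple rolling test: track the R-IGD value over recent generations and stop once its standard deviation over a time window falls below the threshold τ. The function name and exact windowing are illustrative assumptions, not the paper's definition.

```python
import statistics

def should_stop(r_igd_history, window=10, tau=0.1):
    """Illustrative stopping rule (an assumption, not the paper's exact rule):
    stop once the R-IGD values of the last `window` generations have
    stabilized, i.e., their standard deviation drops below tau."""
    if len(r_igd_history) < window:
        # Not enough generations observed yet to judge stability.
        return False
    recent = list(r_igd_history)[-window:]
    return statistics.stdev(recent) < tau
```

In practice, the EMO loop would append the current R-IGD value each generation and terminate once `should_stop` returns `True`.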
Summary
Most real-world problem-solving tasks involve multiple incommensurable and conflicting objectives that need to be considered simultaneously. Many efforts have been devoted to developing evolutionary multiobjective optimization (EMO) algorithms, such as the elitist nondominated sorting genetic algorithm (NSGA-II) [1]–[3], indicator-based EAs [4]–[6], and the multiobjective EA based on decomposition (MOEA/D) [7]–[9]. These algorithms, without any additional preference information (or intervention) from a decision maker (DM), are usually designed to obtain a set of solutions that approximates the whole Pareto-optimal set. This paper presents a systematic way, denoted as the R-metric, to quantitatively evaluate the quality of preferred solutions obtained by a preference-based EMO algorithm using reference points. Our basic idea is to use a multicriterion decision making (MCDM) approach to preprocess the obtained solutions, according to their satisfaction of the DM's preference information, before using a regular metric for performance assessment.
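The "preprocess, then apply a regular metric" idea can be sketched as follows. Everything here is a hedged illustration: the function name `r_igd`, the pivot-and-trim preprocessing, and the `radius` parameter are assumptions for demonstration, not the paper's exact R-metric definition.

```python
import numpy as np

def r_igd(solutions, pareto_front, ref_point, radius=0.2):
    """Sketch of the R-metric idea (preprocessing details are assumed):
    1. Pick a pivot: the obtained solution closest to the DM's reference point.
    2. Trim solutions outside a region of interest of size `radius` around it.
    3. Compute the regular IGD against the sampled Pareto front restricted
       to the same neighborhood."""
    solutions = np.asarray(solutions, dtype=float)
    pareto_front = np.asarray(pareto_front, dtype=float)
    ref_point = np.asarray(ref_point, dtype=float)

    # Step 1: pivot = obtained solution nearest the reference point.
    pivot = solutions[np.argmin(np.linalg.norm(solutions - ref_point, axis=1))]

    # Step 2: keep only solutions inside the region of interest.
    roi = solutions[np.linalg.norm(solutions - pivot, axis=1) <= radius]

    # Restrict the reference front to the same neighborhood (fall back to
    # the whole front if the neighborhood contains no sampled front points).
    mask = np.linalg.norm(pareto_front - pivot, axis=1) <= radius
    front_roi = pareto_front[mask] if mask.any() else pareto_front

    # Step 3: regular IGD = mean distance from each reference-front point
    # to its nearest retained solution.
    dists = np.linalg.norm(front_roi[:, None, :] - roi[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

A perfect preferred set that coincides with the Pareto front near the reference point would score 0; sets that are far from the region of interest score worse, which is the behavior a preference-aware metric should exhibit.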