Abstract

With modern requirements, there is an increasing tendency of considering multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue — how to evaluate the outcome of optimization algorithms, which typically is a set of incomparable solutions (i.e., being Pareto nondominated to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On one hand, multi-objective optimization could still be relatively new to SE/SBSE researchers, who may not be able to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SBSE problems, especially when the problem nature or decision maker’s preferences are explicitly/implicitly known. This has been well echoed in the literature by various inappropriate/inadequate selection and inaccurate/misleading use of evaluation methods. In this paper, we first carry out a systematic and critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in seven repositories, and select 95 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators/methods and general situations in SBSE, which, together with the identified issues, enables us to codify a methodological guidance for selecting and using evaluation methods in different SBSE scenarios.

Highlights

  • In software engineering (SE), it is not uncommon to face a scenario where multiple objectives/criteria need to be considered simultaneously [20], [50]

  • We do not aim to provide a complete review of all parts of the Search-Based SE (SBSE) work, but rather of the aspects related to the major trends in evaluating solution sets

  • A notable difficulty for the authors was that evaluations using descriptive statistics and those using problem-specific indicators are hard to distinguish, because most of them are not clearly stated in the studies and there is a wide variety of problem-specific indicators across Pareto-based SBSE problems

Introduction

In software engineering (SE), it is not uncommon to face a scenario where multiple objectives/criteria need to be considered simultaneously [20], [50]. In such scenarios, there is usually no single optimal solution but rather a set of Pareto optimal solutions (termed a Pareto front in the objective space), i.e., solutions that cannot be improved on one objective without degrading on some other objective. This, in contrast with the idea of aggregating objectives (by weighting) into a single-objective problem, provides different trade-offs between the objectives, from which the decision maker (DM) can choose their favorite solution. In such Pareto-based optimization, a fundamental issue is to evaluate the quality of solution sets (populations) obtained by computational search methods (e.g., greedy search, heuristics, and evolutionary algorithms) in order to know how well the methods perform.
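The notions of dominance, nondominated solution sets, and set-quality evaluation referenced above can be made concrete with a short sketch. The following Python snippet is illustrative only and not from the paper; all function names are ours. It checks Pareto dominance under minimization, filters a population down to its nondominated set, and computes the hypervolume of a two-dimensional front relative to a reference point, hypervolume being one commonly used quality indicator in this context.

```python
# Minimal sketch of Pareto dominance, nondominated filtering, and a 2-D
# hypervolume indicator (minimization assumed). Function names are
# illustrative, not taken from the surveyed paper or any SBSE library.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a Pareto-dominates b: a is no worse on every objective
    and strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

def hypervolume_2d(front: List[Sequence[float]], ref: Sequence[float]) -> float:
    """Area dominated by a 2-D nondominated front, bounded by a reference
    point that every front member must dominate."""
    pts = sorted(nondominated(front))           # ascending in objective 1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)    # add one rectangular strip
        prev_f2 = f2
    return hv

if __name__ == "__main__":
    pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
    front = nondominated(pop)     # (3.0, 3.0) is dominated by (2.0, 2.0)
    print(front)                  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 11.0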
