Abstract

Evaluating the performance of solvers (e.g., computer programs), known as the solver benchmarking problem, has become a topic of intense study, and various approaches have been discussed in the literature. Such a variety of approaches exists because a benchmarking problem is essentially a multicriteria problem. In particular, each benchmarking problem corresponds naturally to a multicriteria decision-making problem and vice versa. In this study, to solve the solver benchmarking problem, we apply the ranking-theory method recently proposed for solving multicriteria decision-making problems. The benchmarking of differential evolution algorithms was considered as a case study to illustrate the ability of the proposed method. This problem was solved using ranking methods from different areas of origin. The comparisons revealed that the proposed method is competitive and can be successfully used to solve benchmarking problems and obtain relevant engineering decisions. This study can help practitioners and researchers use multicriteria decision-making approaches for benchmarking problems in different areas, particularly software benchmarking.

Highlights

  • Evaluating the performance of solvers, that is, the problem of solver benchmarking, has attracted significant attention from scientists

  • For each multicriteria decision-making (MCDM) problem, a corresponding benchmarking context is presented. The rationale for such a consideration is that a vast array of different approaches for MCDM problems can be used for benchmarking problem analysis

  • Our investigation was based on the concept of a benchmarking context, presented in detail, and the observation that a benchmarking problem is an MCDM problem


Summary

Introduction

Evaluating the performance of solvers (e.g., computer programs), that is, the problem of solver benchmarking, has attracted significant attention from scientists. The components of the benchmarking process, namely the solver set, the problem set, the metric for performance assessment, and the statistical tools for data processing, are chosen individually according to the researcher's preferences. We present data for benchmarking in the form of a so-called benchmarking context, that is, a triple 〈S, P, J〉, where S and P are sets of solvers and problems, respectively, and J: S × P ⟶ R is an assessment function (a performance evaluation metric). The rationale for such a consideration is that a vast array of different approaches for MCDM problems can be used for benchmarking problem analysis. Such a multicriteria formulation allows the consideration of Pareto-optimal alternatives (i.e., solvers) as "good" solvers.
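As a rough illustration of this formulation (not taken from the paper; the solver names and scores below are hypothetical), the following Python sketch encodes a benchmarking context 〈S, P, J〉 and extracts the Pareto-optimal solvers under the assumption that lower metric values indicate better performance.

```python
# Minimal sketch of a benchmarking context <S, P, J>:
# S is a set of solvers, P a set of problems, and J: S x P -> R is an
# assessment metric. Here lower values of J are assumed to be better.

def pareto_optimal_solvers(S, P, J):
    """Return the solvers in S that are not dominated by any other solver.

    Solver s is dominated by t if t is at least as good as s on every
    problem in P and strictly better on at least one.
    """
    def dominates(t, s):
        return (all(J(t, p) <= J(s, p) for p in P)
                and any(J(t, p) < J(s, p) for p in P))

    return [s for s in S if not any(dominates(t, s) for t in S if t is not s)]


if __name__ == "__main__":
    # Hypothetical assessments (e.g., runtimes in seconds) of three
    # differential evolution variants on two problems; illustrative only.
    scores = {
        ("DE/rand/1", "p1"): 1.2, ("DE/rand/1", "p2"): 0.9,
        ("DE/best/1", "p1"): 1.0, ("DE/best/1", "p2"): 1.1,
        ("DE/curr-to-best", "p1"): 1.5, ("DE/curr-to-best", "p2"): 1.4,
    }
    S = ["DE/rand/1", "DE/best/1", "DE/curr-to-best"]
    P = ["p1", "p2"]
    J = lambda s, p: scores[(s, p)]

    print(pareto_optimal_solvers(S, P, J))
    # -> ['DE/rand/1', 'DE/best/1']; the third solver is dominated on both problems.
```

In this multicriteria reading, each problem in P plays the role of a criterion, and the non-dominated solvers returned above are the "good" solvers; the ranking methods discussed in the paper go further and order them.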

Methods
Results
Discussion
Conclusion
