Abstract

The more criteria a human decision involves, the more inconsistent the decision tends to be. This study experimentally examines how the degree of pairwise comparison inconsistency depends on whether participants can select the evaluation criteria themselves and on the size of the decision-making problem. A total of 358 participants completed objective and subjective tasks; the former had a single correct solution, whereas the latter did not. The experimental design yielded eight groups in which the degree of inconsistency was quantified using three inconsistency indices (the Consistency Index, the Consistency Ratio and the Euclidean distance) and analysed with repeated measures ANOVA. The results show that the degree of inconsistency depends significantly on how the criteria for pairwise evaluation are determined. When participants are assigned the criteria at random, overall inconsistency of the comparisons decreases as the number of criteria grows; when participants choose the criteria themselves, overall inconsistency increases with the number of criteria. This statistical dependence holds only for males; for females the trend is reversed, but it is not statistically significant.
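For context, the two matrix-based indices named above follow Saaty's standard definitions: for an n x n reciprocal pairwise comparison matrix, the Consistency Index is CI = (lambda_max - n) / (n - 1) and the Consistency Ratio is CR = CI / RI, where RI is the random index for matrices of order n. The sketch below (in Python) computes both, plus one common Euclidean-distance formulation, namely the distance between the matrix and the fully consistent matrix implied by its priority vector. The Euclidean-distance definition and the example matrix are assumptions for illustration; the paper's exact formulation may differ.

    import numpy as np

    # Saaty's random index values for matrix orders 3..10 (standard table).
    RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
                    7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

    def consistency_indices(A):
        """Return (CI, CR) for an n x n reciprocal pairwise comparison matrix A."""
        n = A.shape[0]
        lambda_max = np.linalg.eigvals(A).real.max()   # principal eigenvalue
        ci = (lambda_max - n) / (n - 1)                # Consistency Index
        cr = ci / RANDOM_INDEX[n]                      # Consistency Ratio (n >= 3)
        return ci, cr

    def euclidean_inconsistency(A):
        """Euclidean (Frobenius) distance between A and the fully consistent
        matrix implied by its priority vector (one common formulation; the
        paper's exact definition may differ)."""
        n = A.shape[0]
        w = np.prod(A, axis=1) ** (1.0 / n)            # geometric-mean priorities
        w /= w.sum()
        consistent = np.outer(w, 1.0 / w)              # consistent matrix w_i / w_j
        return np.linalg.norm(A - consistent)

    # Illustrative 3 x 3 comparison matrix (slightly inconsistent).
    A = np.array([[1.0, 2.0, 6.0],
                  [0.5, 1.0, 4.0],
                  [1/6, 0.25, 1.0]])
    ci, cr = consistency_indices(A)
    print(f"CI = {ci:.4f}, CR = {cr:.4f}, Euclidean = {euclidean_inconsistency(A):.4f}")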

Introduction

In the realm of multi-criteria decision making, selecting from options involves ranking a finite set of available alternatives. Pairwise comparison has been the primary approach to this task for several decades. Comparing alternatives has been a significant topic in fields such as cognitive science, decision science, psychology and computer science [1,2,3], and has underpinned modern multi-criteria decision-making methods such as multi-attribute value theory and the analytic hierarchy process [4]. In these methods, the alternatives are compared in pairs and the overall ranking is synthesised using an appropriate algorithm [5].
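As one illustration of such a synthesis algorithm, the sketch below derives a priority (ranking) vector from a reciprocal pairwise comparison matrix by power iteration towards its principal eigenvector, as in the analytic hierarchy process. This is a common choice, not necessarily the specific algorithm referenced in [5], and the example matrix is hypothetical.

    import numpy as np

    def priority_vector(A, tol=1e-10, max_iter=1000):
        """Derive a priority (ranking) vector from a reciprocal pairwise
        comparison matrix A by power iteration towards its principal
        eigenvector, as in the analytic hierarchy process."""
        w = np.ones(A.shape[0]) / A.shape[0]       # start from uniform weights
        for _ in range(max_iter):
            w_next = A @ w
            w_next /= w_next.sum()                 # normalise weights to sum to 1
            if np.abs(w_next - w).sum() < tol:
                return w_next
            w = w_next
        return w

    # Hypothetical example: three alternatives compared pairwise.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 0.5, 1.0]])
    weights = priority_vector(A)
    order = np.argsort(-weights)                   # alternative indices, best first
    print("weights:", np.round(weights, 3), "ranking:", order)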
