Abstract

A key to collaborative decision making is to aggregate individual evaluations into a group decision. One of its fundamental challenges lies in identifying and handling irregular or unfair ratings and reducing their impact on group decisions. Little research has attempted to identify irregular ratings in a collaborative assessment task, let alone to develop effective approaches that reduce their negative impact on the final group judgment. In this article, based on synergy theory, we propose a novel consensus-based collaborative evaluation (CE) method, Collaborative Evaluation based on rating DIFFerence (CE-DIFF), for identifying irregular ratings and mitigating their impact on collaborative decisions. CE-DIFF automatically determines and assigns weights to individual evaluators or ratings, through continuous iterations, according to how consistent each evaluator's ratings are with the group assessment outcome. We conducted two empirical experiments to evaluate the proposed method. The results show that CE-DIFF deals with irregular ratings more accurately than existing CE methods such as the arithmetic mean and the trimmed mean, and that its effectiveness is independent of group size. This study provides a new and more effective method for collaborative assessment, as well as novel theoretical insights and practical implications for improving collaborative assessment.
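The abstract does not state the CE-DIFF formulas, but the general idea of iteratively weighting evaluators by their agreement with the current group estimate can be sketched as follows. This is a minimal illustrative sketch only, assuming a simple inverse-deviation weighting rule and a fixed-point iteration; the function name, parameters, and stopping criterion are hypothetical and not taken from the published method.

```python
import numpy as np


def consensus_weighted_ratings(ratings, n_iter=50, eps=1e-9):
    """Illustrative consensus-based weighting (not the published CE-DIFF equations).

    ratings: 2-D array of shape (n_evaluators, n_items).
    Returns (group_scores, evaluator_weights).
    """
    ratings = np.asarray(ratings, dtype=float)
    n_evaluators = ratings.shape[0]
    weights = np.full(n_evaluators, 1.0 / n_evaluators)  # start from equal weights

    for _ in range(n_iter):
        # Current group estimate: weighted mean of each item's ratings.
        group = weights @ ratings
        # Each evaluator's average absolute difference from the group estimate.
        diff = np.abs(ratings - group).mean(axis=1)
        # Down-weight evaluators whose ratings deviate more from the consensus.
        new_weights = 1.0 / (diff + eps)
        new_weights /= new_weights.sum()
        if np.allclose(new_weights, weights, atol=1e-8):
            break
        weights = new_weights

    return weights @ ratings, weights


if __name__ == "__main__":
    # Three ordinary evaluators and one irregular rater scoring four items.
    ratings = [
        [7, 8, 6, 9],
        [7, 7, 6, 8],
        [8, 8, 7, 9],
        [2, 10, 1, 3],  # irregular ratings
    ]
    scores, w = consensus_weighted_ratings(ratings)
    print("group scores:", scores.round(2))
    print("evaluator weights:", w.round(3))
```

In this toy example the iteration converges to a small weight for the irregular rater, so the final group scores are dominated by the three consistent evaluators, which mirrors the behavior the abstract attributes to CE-DIFF.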
