Abstract

Taking a rater cognition approach, we used three extant datasets from recent divergent thinking research to examine the subjective processes raters employ when scoring the quality of ideas. Subjective ratings have recently gained popularity, and three classic dimensions are often combined into a single score: uncommonness, remoteness, and cleverness. Scoring ideas or sets of ideas is therefore a demanding task, particularly when a set contains many ideas; in such situations, cognitive load is expected to be highest and errors most likely. A cumulative ordinal logit model showed that rater disagreement was predicted by the amount of coded information (complexity). Disagreement was higher when participants were instructed to be creative (vs. a standard instruction), and a significant complexity-by-instruction interaction emerged. Simple slope analysis indicated that the influence of complexity on disagreement was less pronounced under the be-creative instruction, and that the difference in disagreement between instructions was more pronounced for low-complexity than for high-complexity idea sets. Implications for deriving subjective creativity ratings and for training raters are discussed.
