Abstract

The assessment of creative problem solving (CPS) is challenging. Elements of an assessment procedure, such as the tasks that are used and the raters who assess those tasks, introduce variation in student scores that does not necessarily reflect actual differences in students’ creative problem solving abilities. When creativity researchers evaluate assessment procedures, they often inspect these elements, such as tasks and raters, separately. We show that Generalizability Theory allows researchers to investigate creativity assessment procedures, and CPS assessments in particular, in a comprehensive and integrated way. In this paper, we first introduce this statistical framework and the choices creativity researchers need to make before applying Generalizability Theory to their data. Then, Generalizability Theory is applied in an analysis of CPS assessment tasks. We highlight how alterations in the nature of the assessment procedure, such as changing the number of tasks or raters, may affect the quality of CPS scores. Furthermore, we present implications for the assessment of CPS and for creativity research in general.
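The idea that adding tasks or raters changes score quality can be made concrete with the standard generalizability coefficient for a fully crossed persons × tasks × raters design. The sketch below uses illustrative (made-up) variance components, not the paper's data; the formula itself is the textbook G-theory expression, where averaging over more tasks or raters shrinks the corresponding error terms.

```python
def g_coefficient(var_p, var_pt, var_pr, var_ptr, n_tasks, n_raters):
    """Generalizability coefficient for a crossed p x t x r design.

    var_p   : variance among persons (the 'true' signal)
    var_pt  : person-by-task interaction variance
    var_pr  : person-by-rater interaction variance
    var_ptr : residual (person-by-task-by-rater + error) variance
    Averaging over n_tasks tasks and n_raters raters divides the
    corresponding error components, so the coefficient rises as the
    design grows.
    """
    relative_error = (var_pt / n_tasks
                      + var_pr / n_raters
                      + var_ptr / (n_tasks * n_raters))
    return var_p / (var_p + relative_error)


# Hypothetical variance components for illustration only.
components = dict(var_p=0.50, var_pt=0.30, var_pr=0.10, var_ptr=0.20)

g_small = g_coefficient(**components, n_tasks=1, n_raters=1)
g_large = g_coefficient(**components, n_tasks=4, n_raters=2)
print(f"1 task, 1 rater:  {g_small:.3f}")   # lower dependability
print(f"4 tasks, 2 raters: {g_large:.3f}")  # higher dependability
```

With these toy components, a single task scored by a single rater yields a noticeably lower coefficient than four tasks scored by two raters, which mirrors the paper's point that design decisions, not only the instrument itself, shape score quality.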
