Abstract

Given widespread concern about collaborative problem solving (CPS) skills, interest in assessing them with digital technologies has grown considerably in recent years. This study systematically reviewed how CPS skills have been assessed with digital technologies in the literature. A total of 40 articles were reviewed to analyze specific computer-based assessment instruments for CPS skills from four perspectives: research context, theoretical model for assessment, assessment type, and reliability and validity evidence. The results indicate that most tests target samples of fewer than 500 junior students. Nine theoretical models are employed for assessing CPS skills; most treat these skills as an explicit combination of social and cognitive skills and have been applied to a limited range of participant age levels, collaboration features, and team compositions. A total of 22 tests have been employed, falling into four types: those using specific predefined messages in human-agent mode, and those using an online chat box, videoconferencing, or face-to-face collaboration in human-human mode. Each type shows great diversity in participants' age levels, types of CPS task, team compositions, types of assessment data, and methods of data recording and scoring. A number of tests lack reliability and validity evidence. Our findings are expected to benefit researchers and test developers by suggesting directions for future research, including testing the applicability of theoretical models for assessing CPS skills across a wide range of assessment contexts. In addition, future researchers should improve the development, data processing, and reporting of these four types of computer-based assessment instruments through approaches tailored to each type.
