Abstract

Performance assessments (PAs) offer a more authentic measure of higher-order skills, which is ideal for competency-based education (CBE), especially for students already in the workplace and striving to advance their careers. The goal of the current study was to examine the validity of undergraduate PA score interpretation in the College of IT at an online, CBE higher-education institution by evaluating (a) the transparency of the cognitive complexity, or demands, of the task as communicated through the task prompt versus the expected cognitive complexity based on its associated rubric aspect and (b) the impact of cognitive complexity on task difficulty. We found a discrepancy between the communicated and expected cognitive complexity of PA tasks (i.e., prompt vs. rubric), with rubric complexity higher, on average, than task prompt complexity. This discrepancy negatively impacts reliability but does not affect the difficulty of PA tasks. Moreover, the cognitive complexity of both the task prompt and the rubric aspect significantly affects the difficulty of PA tasks when measured with Bloom's taxonomy but not with Webb's DOK, and this effect is slightly stronger for the rubric aspect than for the task prompt. Discussion centers on how these findings can inform and improve PA task writing and review procedures for assessment developers, as well as how PAs can be customized (in their difficulty levels) to different course levels or individual students to improve learning.
