Abstract

Background
Concept inventories (CIs) are commonly used in engineering disciplines to assess students' conceptual understanding and to evaluate instruction, but educators often use CIs without sufficient evidence that a structured approach has been applied to validate inferences about student thinking.

Purpose
We propose an analytic framework for evaluating the validity arguments of CIs. We focus on three types of claims: that CI scores enable one to infer (1) students' overall understanding of all concepts identified in the CI, (2) students' understanding of specific concepts, and (3) students' propensity for misconceptions or common errors.

Method
We applied our analytic framework to three CIs: the Concept Assessment Tool for Statics (CATS), the Statistics Concept Inventory (SCI), and the Dynamics Concept Inventory (DCI).

Results
Using our analytic framework, we found varying degrees of support for each type of claim. CATS and DCI analyses indicated that these CIs could reliably measure students' overall understanding of all concepts identified in the CI, whereas SCI analyses provided limited evidence for this claim. Analyses revealed that the CATS could accurately measure students' understanding of specific concepts; analyses for the other two CIs did not support this claim. None of the CI analyses provided evidence that the instruments could reliably measure students' misconceptions and common errors.

Conclusions
Our analytic framework provides a structure for evaluating CI validity. Engineering educators can apply this framework to evaluate aspects of CI validity and make more warranted uses and interpretations of CI outcome scores.
