Abstract

Despite growing interest in early childhood computational thinking (CT), there is a lack of validated assessments for children who are emerging readers. This paper presents evidence for the validity and reliability of a performance-based CT assessment, using item response theory (IRT) with data from 272 children aged 4–8. Using a two-parameter logistic (2PL) IRT model, we confirmed that model- and item-level fits were acceptable. Item analyses revealed high discrimination (M = 2.26, SD = 1.12) and moderate item difficulty (M = −0.21, SD = 0.86), on average, across the 19 items. Reliability analysis demonstrated that the assessment was substantially reliable (marginal reliability: rxx = 0.87). Differential item functioning (DIF) analyses indicated that the assessment estimated children's item parameters fairly, regardless of gender and age. However, we confirmed gaps in latent CT ability (θ) by gender and age: boys showed higher latent CT ability than girls, and older children (above 72 months) showed higher latent ability than younger children (below 72 months). Findings suggest the assessment is a fair measure that can serve as a reliable and valid tool for assessing CT in children who are emerging readers.
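For readers unfamiliar with the model referenced above, the 2PL IRT model expresses the probability of a correct item response as a logistic function of the gap between a child's latent ability (θ) and the item's difficulty (b), scaled by the item's discrimination (a). A minimal sketch, using the reported average item parameters purely as illustrative inputs (not as a reproduction of the paper's estimation):

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) item response function:
    probability of a correct response given latent ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative values taken from the reported sample averages
# (mean discrimination a = 2.26, mean difficulty b = -0.21).
p_avg_child = p_correct(0.0, 2.26, -0.21)   # child of average ability
p_abler_child = p_correct(1.0, 2.26, -0.21)  # child one SD above average
```

Because the average item difficulty (−0.21) sits slightly below the average ability (θ = 0), a child of average ability is somewhat more likely than not to answer an average item correctly, and the probability rises monotonically with θ.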
