Abstract

There is a continuing rise in studies examining the impact that adaptive comparative judgment (ACJ) can have on practice in technology education. This appears to stem from ACJ being seen to offer a solution to the difficulties faced in the assessment of designerly activity which is prominent in contemporary technology education internationally. Central research questions to date have focused on whether ACJ was feasible, reliable, and offered broad educational merit. With exploratory evidence indicating this to be the case, there is now a need to progress this research agenda in a more systematic fashion. To support this, a critical review of how ACJ has been used and studied in prior work was conducted. The findings are presented thematically and suggest the existence of internal validity threats in prior research, the need for a theoretical framework and the consideration of falsifiability, and the need to justify and make transparent methodological and analytical procedures. Research questions now of pertinent importance are presented, and it is envisioned that the observations made through this review will support the design of future inquiry.

Highlights

  • Technology education is relatively new to national curricula at primary and secondary levels in comparison to subjects such as mathematics, the natural sciences, and modern and classic languages

  • The authors observed a significant difference whereby the experimental group, on average, outperformed the control group, and concluded that “our analysis suggests that students who participate in adaptive comparative judgment (ACJ) in the midst of a design assignment reach significantly better levels of achievement than students who do not” (p. 375)

  • It is clear that the validity of ACJ can be qualified in many ways, such as through the careful design of the judging cohort and by making use of misfit statistics

Introduction

Technology education is relatively new to national curricula at primary and secondary levels in comparison to subjects such as mathematics, the natural sciences, and modern and classic languages. Central research questions to date have asked whether ACJ is feasible, reliable, and offers broad educational merit. The resounding answer to these questions is “yes.” ACJ has been shown to be highly reliable in each relevant study that presents reliability statistics (Kimbell, 2012; Bartholomew and Yoshikawa-Ruesch, 2018; Bartholomew and Jones, 2021), and its validity can be seen as tied to the assessors (Buckley et al., 2020a; Hartell and Buckley, 2021), with outputted misfit statistics being useful for auditing or gaining insight into outlying judges or portfolios (Canty, 2012).
