Abstract

The use of design assignments for teaching, learning, and assessment is considered a signature of technology education. However, the valid and reliable assessment of features of quality within designerly outputs remains difficult. In light of recent educational reforms in Ireland, which introduce classroom-based assessments centring on design in the technology subjects, it is paramount that the implementation of design assessment is critically considered. An exploratory study was conducted with a first-year cohort of initial technology teacher education students (N = 126), who completed a design assignment and a subsequent assessment process using adaptive comparative judgement (ACJ). In considering the use of ACJ as a potential tool for design assessment at post-primary level, data analysis focused on the criteria used for assessment. Results indicate that quantitative variables, i.e. the amount of work done, can significantly predict performance (R² = .333, p < .001); however, qualitative findings suggest that quantity may simply align with quality. Further results illustrate that a significant yet practically meaningless bias may exist in the judgement of work through ACJ (ϕ = .082, p < .01) and that there was a need to use varying criteria in the assessment of design outputs.

