Abstract

This study examined constructed response (CR) test tasks in school-based assessment with a focus on construct validity. To this end, the researcher collected 771 CR test items from the midterm and final exam papers of the 2021 academic year from 37 high schools across the country. Three Korean raters analyzed the characteristics of the test tasks based on a modified version of Bachman and Palmer's test method facets, and several corpus analysis tools, including WordSmith 8.0, Range, LCA, L2SCA, and readability calculators, were applied. The results clearly demonstrate a gap between practice and the published guidelines for introducing CR test tasks. The analysis of task characteristics showed that the CR test is mainly intended to assess students' reading skills rather than writing skills, largely through short answer or completion forms of CR test tasks. The corpus analysis of the language input showed that lexical difficulty, diversity, and complexity, syntactic complexity, and textual difficulty varied across three source types: the textbook, the NCSAT (mock CSAT), and others. The findings of the study offer several pedagogical implications regarding CR test tasks in school-based assessments.
