Abstract

Very little research has been devoted to evaluating the national English Home Language (HL) curriculum and assessment system. Not only is there a lack of clarity on whether the language subject is offered at a sufficiently high level to meet the declared objectives of the curriculum, but the reliability of the results obtained by Grade 12 learners in the exit-level examination has also been called into question. To shed light on the issue, this study takes a close look at the language component of the school-leaving examination over the period 2008-2012, to determine whether evidence of high language ability can be generated through the current selection of task types and whether the inferred ability can be generalised to non-examination contexts. Of primary interest are the validity of the construct on which the examination is built and of the sub-abilities being measured, as well as the validity of the scoring. One of the key findings of the study is that the language papers cannot be considered indicators of advanced and differential language ability, only of basic and general proficiency. The lack of specifications in the design of the examination items and in the construction of the marking memoranda undermines the validity and reliability of the assessment. As a consequence, the inferences made on the basis of examinees' scores are highly subjective and cannot be generalised to other domains of language use. The study hopes to draw attention to the importance of the format and design of the examination papers in maintaining educational standards.
