Abstract

The national school-leaving examination in mathematics in England and Wales includes both formal timed written examinations and a school-based component. Although originally conceived in quite broad terms, the school-based component has been operationalised as a rather narrow range of problems, typically combinatorial in nature and generally called ‘investigations’. The marking guides given to teachers for the assessment of these problems are generic, in that they are meant to apply to all such problems. In this article it is argued that generic marking guides are unlikely to be successful, and that teachers engaged in assessing school-based work need to take into account features of the specific activity in which the student engages. A corpus of 20,000 student responses to 80 investigative tasks was analysed to produce a framework of task factors. The framework characterises tasks in terms of seven factors arranged in four categories. The first category involves the match between the task metaphor and the intended task, and the extent to which the task metaphor is likely to be shared by students. The second concerns the complexity of the task structure, both in terms of the search space of the task and the relationship between the dependent and independent variables. The third involves the complexity of the generalisation, both as a term-to-term and as a position-to-term rule. The fourth concerns whether the kinds of proof required (or possible) are inductive or deductive.

