ABSTRACT At its most fundamental level, experimental design has three major structural characteristics: the independent variable, the dependent variable, and the control variable(s). Pupil competence in experimental design might involve simply recognizing these three types of variable and being able to design a ‘fair test’. However, the relationships between these variables within a coherent design strategy must be understood before pupils can be said to have developed a systematic approach to experimental design. There can be little doubt that the ability to design an experiment constitutes a major part of the rationale for the recent development of a process approach to the teaching of science in schools. In this rationale, it is the pursuit of the methodology of science which is seen as the best route to ensuring a complete science education. It is therefore timely to consider pupil performance in this area. This paper reports performance results in this skill area, derived from packages of questions designed to shed light on the extent to which 11- and 13-year-old pupils can control variables, and on how factors such as question format and context affect their performance. The performance results, augmented by analyses of variance, indicate that it is factors within the questions, rather than the skill itself, that lead to large variations in facility. It is also shown that the theoretical relationship between questions aimed at assessing similar aspects of experimental design is not reflected in pupil performance. This volatility of performance, and the lack of association between theoretically related questions, are also apparent for individual pupils. Moreover, it is aspects of the question itself, rather than its supposed cognitive ‘demand’, which are the most significant determinants of performance. It is therefore unwise to measure attainment in one context or in one format only; conclusions drawn from such narrow measures are likely to be spurious. Quite subtle variations in the way questions are asked, the criteria against which pupils’ responses are judged, and the way in which their responses are scored can give rise to very different conclusions about levels of attainment. This indicates that continuous assessment by teachers will be an essential adjunct to the more formal but narrowly focused external tests (Standard Assessment Tasks). There is now an urgent need to clarify what is meant by the need to ensure a ‘fair test’. Are we simply seeking to make pupils aware of the structural elements of an experiment, such as the dependent variable and how to measure it, or are we aiming to make them aware of the need to plan and to carry out well-designed experiments? If the former, then these results suggest that we are achieving a certain amount of success, presumably because of the recent emphasis on teaching science as a process rather than as a body of facts. If the latter, however, then the data presented in this paper show that there is still much to be done, not least if pupils are to fulfil National Curriculum attainment criteria and to apply their expertise to novel contexts.