Abstract

Driven largely by calls for accountability, the use of large‐scale testing is expanding in both the number and the purposes of testing programmes. At the same time, financial constraints have prompted attempts to shorten such examinations. An examination of the 1994/1995 and 1995/1996 British Columbia Scholarship programme illustrates that differential and unanticipated effects can occur when such changes are made. Removing a portion of the constructed‐response (CR) and written‐task (WT) items used to identify scholarship recipients produced differences in scholarship scores and in the identification of scholarship recipients. Further, these differences affected subgroups of students differentially. While no differences were attributed to gender, higher difference rates were associated with course area (humanities vs. science) and examination session (January vs. June). The results illustrate the complex and contextual impact of changes to examination programmes and the potential consequences of such changes. Test developers and users must make a greater effort to examine the consequences of examination programmes, and of planned changes to them, for the students and others who may be affected by the results.
