Teacher preparation programs must have instruments that yield accurate and fair appraisals of preservice teachers' classroom teaching. Preservice teachers themselves need valid information about their performance so that they can correct difficulties that could impede successful practice and interfere with obtaining a position. College supervisors require an evaluation method that provides a reliable standard against which they can assess student teachers' performance with confidence that their findings are replicable. Preparation institutions profit from informed, objective decisions about the retention of individual candidates and can use aggregated candidate results to inform program redesign. The students whom candidates teach, their parents, and the public at large are best served when preservice teachers are evaluated with techniques yielding unbiased, rigorous outcomes.

The following article serves two purposes: to demonstrate for teacher educators a process for creating a preservice teacher evaluation instrument that draws on the expertise of those who make such evaluations, and to describe the development of such a preservice rating instrument currently in use at Saint Mary's College of California.

Background

Although assessment of teaching performance is a continuing concern of educational researchers and teacher educators, the literature contains few descriptions of efforts to create instruments specifically targeting preservice teachers. All preparing institutions must have means of appraising candidates, yet most make use of informal measures and methods that never receive exposure beyond the program or unit in which they arise. Several reports in the ERIC system (Mamchur & Nelson, 1984; Stolworthy, 1990a, 1990b, 1991) rely on instruments of this sort. Preparation institutions may also use instruments, or adaptations of instruments, originally developed for use with practicing teachers.
Although this approach may have appeal because of the apparent similarity of preservice and inservice activities, the differences in experience and judgment between the novice and the practicing teacher argue against wholesale use of this strategy.

Evidence exists of a nascent movement by preparation institutions and state departments of education to devise local or statewide standardized performance appraisal procedures for evaluating preservice candidates. Cloud-Silva and Denton (1989) describe the development of the Classroom Observation and Assessment Scale for Teaching Candidates (COAST), a prototype, low-inference observation instrument deductively derived from teacher effectiveness research for use in measuring minimal teaching competencies of preservice teachers in Texas. Powell (1986) reports relationships among ratings of field experience performance by student teachers, supervising teachers, and university coordinators using the Competency Based Teacher Education (CBTE) scale, a modified version of the Teacher Performance Assessment Instruments (TPAI). The CBTE and the TPAI are composed of instruments containing generic teaching competencies, each competency measured by several indicators. The indicators are scored on a Likert-type scale from 1 to 5, yielding an average score for each competency.

These examples illustrate the two most common types of measures for assessing teacher behavior: COAST is a behavior-based, low-inference observation system, whereas the CBTE and TPAI are high-inference rating systems. Although these two terms typically have been used as though they represent qualitatively different technologies, they reflect two ends of the same continuum (Good, Biddle, & Brophy, 1975). Both are based on classroom observation, and both have inherent weaknesses.
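The scoring scheme described above for the CBTE/TPAI can be sketched computationally. The following is a minimal illustration, not the instruments' actual scoring software: it assumes only what the text states, namely that each competency is measured by several indicators rated 1 to 5 on a Likert-type scale and that the competency score is the mean of its indicator ratings. The competency names and ratings are hypothetical.

```python
# Illustrative sketch of high-inference rating-scale scoring as described
# for the CBTE/TPAI: several 1-5 Likert-type indicator ratings per generic
# teaching competency, averaged into one score per competency.
# Competency names and ratings below are hypothetical examples.

def competency_scores(ratings):
    """Average the 1-5 indicator ratings within each competency."""
    scores = {}
    for competency, indicators in ratings.items():
        if any(not 1 <= r <= 5 for r in indicators):
            raise ValueError(f"ratings for {competency!r} must be 1-5")
        scores[competency] = sum(indicators) / len(indicators)
    return scores

# Hypothetical indicator ratings from one classroom observation
observation = {
    "lesson planning": [4, 5, 4],
    "classroom management": [3, 4, 3, 4],
    "assessment of learning": [5, 4],
}
print(competency_scores(observation))
# "lesson planning" averages to (4 + 5 + 4) / 3, i.e. about 4.33
```

Averaging across indicators is what makes such a scale high-inference: the rater's holistic judgments, rather than counts of discrete behaviors, are the raw data being summarized.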
Although some (Medley, Soar, & Coker, 1984) laud low-inference systems for their objectivity, these systems nonetheless overlook teacher attributes that cannot be measured merely by counting the number of times certain discrete behaviors occur. …