Abstract

Teacher observations are being used for high‐stakes purposes in states across the country, and administrators often serve as raters in teacher evaluation systems. This paper examines how the cognitive aspects of administrators' use of an observation instrument, a modified version of Charlotte Danielson's Framework for Teaching, interact with the complex and dynamic rating contexts in applied settings. Findings suggest that administrators' rating strategies and rating approaches vary as the characteristics of the rating contexts differ. Even shortly after training (and more so as time passed), raters used reasoning strategies not supported by their training to make scoring decisions. We discuss the implications of the findings for the training of raters and the development of evaluation systems in high‐stakes contexts.

