Abstract
No assessment is entirely free of bias. This paper presents findings on how raters in the research group evaluate the extent to which various types of rater bias influence their grading of students’ written compositions. The sources of bias covered in the article include the teacher’s familiarity with the student writer and his or her proficiency in English, the difficulty of the writing task, distressing content likely to trigger the rater’s emotional reaction, the test taker’s views clashing with those of the rater, the student’s progress, and the like. The data were gathered from the participants in the study via a questionnaire, and the researcher’s interpretation of the respondents’ answers was verified through interviews. Although the two research methods and self-evaluation have their drawbacks, the results reveal interesting, relevant and important information on aspects which make written composition assessment less reliable and valid. The findings confirm the need to raise raters’ awareness of the causes of bias to which they are most susceptible, bringing them closer to effectively addressing the problem of assessment bias. The research, involving eleven lecturers teaching Language in Use at the Department of English and American Studies at the Faculty of Arts, University of Ljubljana, is part of a much larger project based on the author’s PhD thesis.