Abstract

There has been considerable interest in assessing the presence and extent of biases in student evaluations of teaching (SETs) across higher education institutions. Although most studies focus on disparities in numerical ratings, many scholars draw attention to how biases can also manifest in the way students write about their instructors in the open-ended comment portions of SETs. Further, with the proliferation of computational text-analytic tools that have made it possible to analyse ‘big text data’, institutional researchers can now inductively uncover biases in SETs at previously unimaginable scales. In this chapter, we use the case of a committee on the evaluation of teaching at a North American research university to show how one type of computational text analysis – structural topic modelling – can be used to find gender biases in SET comments after controlling for a variety of student-, instructor-, and course-level factors. The analysis revealed a total of 20 general evaluative themes that students use to discuss their instructors’ performance across more than 172,000 SETs from 2013 through 2015: 10 ‘strengths’ and 10 ‘weaknesses’. The analysis also suggested that male and female students may apply these strengths and weaknesses differently to male and female instructors. We discuss the limitations and promises of our approach.
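
To make the general workflow concrete, below is a minimal, hypothetical sketch of topic modelling on evaluation comments with covariate checks. It is not the authors’ pipeline: structural topic modelling is typically fitted with the R ‘stm’ package, which models topic prevalence directly as a function of document covariates, whereas this sketch substitutes a plain LDA model (scikit-learn) followed by a post-hoc regression of each topic’s proportion on illustrative covariates. All data, column names, and the topic count are invented for illustration.

```python
# Hypothetical sketch only -- a rough Python stand-in for the structural
# topic modelling workflow described in the abstract, not the authors' code.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import statsmodels.formula.api as smf

# Toy stand-in for SET comments plus instructor/student covariates
df = pd.DataFrame({
    "comment": [
        "Lectures were clear and well organised.",
        "Grading felt unfair and feedback was slow.",
        "Very approachable and clearly cares about students.",
        "Often unprepared and hard to follow.",
        "Explains difficult material with great examples.",
        "Exams did not match what was taught in class.",
        "Enthusiastic teacher who made the course enjoyable.",
        "Disorganised slides and confusing assignments.",
    ],
    "instructor_gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "student_gender":    ["M", "F", "F", "M", "F", "M", "M", "F"],
})

# Document-term matrix built from the open-ended comments
dtm = CountVectorizer(stop_words="english").fit_transform(df["comment"])

# Fit a small topic model (the study itself reports 20 evaluative themes)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
theta = lda.fit_transform(dtm)  # per-comment topic proportions

# Post-hoc check for differential topic use by instructor and student gender
for k in range(theta.shape[1]):
    df["topic"] = theta[:, k]
    fit = smf.ols("topic ~ instructor_gender + student_gender", data=df).fit()
    print(f"Topic {k}:")
    print(fit.params.round(3))
```

With real data, the covariates would also include the course-level factors the abstract mentions, and the prevalence effects would be estimated within the topic model itself rather than in a separate regression.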
