Abstract

Student evaluation of teaching (SET) questionnaires are ubiquitous in North American higher education institutions, serving both formative and summative purposes. Data collected from SET questionnaires are usually item-level data characterized by multivariate categorical outcomes (i.e., multiple Likert-type items in the questionnaires) and a cross-classified structure (i.e., students and instructors are non-nested). Recently, a new approach, the cross-classified IRT model, was proposed to handle SET data appropriately. To inform researchers in higher education, this article reviews the cross-classified IRT model along with three existing approaches applied in SET studies: the cross-classified random effects model (CCREM), the multilevel item response theory (MLIRT) model, and a two-step integrated strategy. The strengths and weaknesses of each of the four approaches are discussed. Additionally, the new and existing approaches are compared through an empirical data analysis and a preliminary simulation study. The article concludes with general suggestions to researchers for analyzing SET data and a discussion of limitations and future research directions.
