Abstract

Institutions are understandably interested in the profile of their own reputations based upon publicly available data about student experiences. The UK’s National Student Survey (NSS) metrics are integrated into several ‘Good University’ calculations, whereas teaching teams most often use the survey’s text comments, rather than the metrics directly, to change practices. Little is known about how the messages in the national survey’s text comments relate to the accompanying numerical ratings, partly because the comments are confidential to the institution and unavailable for wide-scale research. We categorised institutional NSS text comments into themes that mirrored those of the original questionnaire and compared the frequencies of thematic comments with the national satisfaction ratings for several subject areas. For the first time, we demonstrate broad agreement between comments about teaching staff and course organisation and the performance of those subject areas relative to their national counterparts. These findings are consistent with previous quantitative models identifying the factors that most influence overall satisfaction ratings. We intend this study to be a catalyst for other institutions to explore their non-public textual returns in a similar way. The outcomes of this type of work are pertinent to all countries that use large-scale surveys; however, institutions will need to release their findings to a public audience if we are to gain a national or international perspective on this key linkage between publicly available metrics and the associated text comments.
