Abstract

Student evaluation surveys provide educational institutions with important feedback on the student experience of teaching and courses; however, qualitative comments can contain offensive, insulting, or threatening content. Large educational institutions generate thousands of comments per academic term, so manually screening for potentially harmful comments is generally infeasible. We developed a methodology for semi-automated screening of student comments that combines a machine learning decision support system with a detailed psychological assessment protocol. In a case study at a large public Australian university, our system identified 4,258 of 62,049 comments (6.9%) as potentially harmful and requiring further review. Stakeholder feedback indicates that the methodology reduces staff workload and could be applied broadly in other settings.

Implications for practice or policy:
- Educational institutions can adopt this methodology to dramatically decrease the number of working hours required to screen free-text comments for harmful content.
- Researchers can use the proposed psychology-based assessment as an example of how to develop a protocol for categorising comments.
- Educators and researchers can use this case study as a guide to developing their own decision support system that implements free-text comment classifiers.
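To make the screening step concrete, the following is a minimal Python sketch of a classifier-plus-threshold decision support component of the kind the abstract describes. It is an illustration only, not the paper's method: the actual model, features, labelled data, and decision threshold are not specified in the abstract, and the training examples, the screen function, and the REVIEW_THRESHOLD value below are hypothetical.

    # Minimal sketch of a comment-screening classifier, assuming a labelled
    # training set of comments with binary labels (1 = potentially harmful).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled examples; a real system would train on many
    # human-annotated comments.
    train_comments = [
        "The lecturer explained the concepts clearly.",  # benign
        "You are useless and should be fired.",          # harmful (insulting)
    ]
    train_labels = [0, 1]

    # TF-IDF features feeding a linear classifier that outputs probabilities.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_comments, train_labels)

    # Hypothetical threshold, set low so borderline comments go to a human
    # reviewer rather than being silently passed through.
    REVIEW_THRESHOLD = 0.3

    def screen(comments):
        """Return (comment, probability) pairs that warrant manual review."""
        probs = model.predict_proba(comments)[:, 1]
        return [(c, p) for c, p in zip(comments, probs) if p >= REVIEW_THRESHOLD]

    for comment, prob in screen(["Great course overall.",
                                 "I will make you regret this."]):
        print(f"{prob:.2f}  {comment}")

In a semi-automated workflow like the one described, only the flagged subset (6.9% of comments in the case study) would proceed to the psychological assessment protocol, which is where the reduction in staff workload comes from.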
