Abstract

Collecting student feedback is commonplace in universities. These surveys usually include both open-ended questions and Likert-type scale questions, but the answers to the open-ended questions are rarely analyzed beyond simply reading them. Recent research has shown that text mining and machine learning methods can be used to extract useful topics from large volumes of open student feedback. However, to our knowledge, few off-the-shelf applications exist for processing open-ended student feedback automatically. Moreover, text mining tools may not be accessible to all educators, as they require in-depth knowledge of text mining, data analysis, or programming. To address this gap, the current study presents a tool (Palaute) for analyzing written student feedback using topic modeling and emotion analysis. The utility of the tool is demonstrated with two real-life use cases: first, we analyze student feedback collected from courses in a software engineering degree programme, and then feedback from all courses organized at a university. In our experiments, the analysis of open-ended feedback revealed that on certain software engineering course modules the workload is perceived as heavy, and that on some programming courses the automatic code grader could be improved. The university-wide analysis produced indicators of good teaching quality, such as interesting courses, but also concrete improvement points, such as the time given to complete course assignments. The use of the tool therefore yielded actionable improvement points that could not have been identified from numeric feedback metrics alone. Based on this demonstrated utility, the paper describes the design and implementation of our open-source tool.
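The Palaute pipeline itself is not reproduced here, but the following minimal sketch illustrates the kind of topic modeling the abstract describes, assuming scikit-learn's CountVectorizer and LatentDirichletAllocation as the implementation; the feedback strings, the topic count, and the English stop-word choice are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal topic-modeling sketch over open-ended feedback (illustrative;
# not the paper's actual Palaute pipeline).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical feedback answers; real data would come from course surveys.
feedback = [
    "The workload on this module felt very heavy",
    "The automatic code grader rejected working solutions",
    "Interesting course, but too little time for the assignments",
    "Great lectures, the exercises supported the material well",
]

# Bag-of-words representation; real feedback would need stop words
# tuned to the feedback language.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(feedback)

# Fit a small LDA model; the number of topics is a modeling choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words of each discovered topic as a rough summary.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[:-6:-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```

In practice, the quality of such summaries depends heavily on preprocessing (lemmatization, language-specific stop words) and on the chosen number of topics, both of which must be tuned to the feedback corpus at hand.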

Highlights

  • In universities, the most common way to evaluate the quality of teaching is to analyze feedback collected from the students [1]–[9]

  • The following research questions were formulated: RQ1: What can be learned from the written student feedback with the tool?

  • Topic modeling and emotion analysis can be used in the educational context as a way of creating summaries of the data

Introduction

In universities, the most common way to evaluate the quality of teaching is to analyze feedback collected from the students [1]–[9]. The value of student evaluations of teaching (SET) as a measure of teaching quality is, however, limited at best: education research has shown that SET is not a reliable metric, as student ratings of teaching and student learning are not related [7], [10], [11]. The current study focuses on the added value provided by open-ended, written student feedback. The automatic analysis of open-ended feedback using text mining and machine learning tools is a recent trend in higher education research (see, for example, [13]–[30]). The extant literature has shown that analyzing open-ended feedback can uncover insights that could not be distinguished using quantitative evaluations alone.
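To make the second half of the tool's analysis concrete, the sketch below scores hypothetical feedback sentences with a lexicon-based analyzer. It uses NLTK's VADER as a simple, English-only stand-in for the emotion analysis described in the paper; the input sentences are invented, and the paper's actual emotion model is not reproduced here.

```python
# Lexicon-based sentiment scoring as a stand-in for emotion analysis
# (illustrative; not the paper's actual model).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# VADER ships as a downloadable NLTK resource.
nltk.download("vader_lexicon", quiet=True)

analyzer = SentimentIntensityAnalyzer()
feedback = [
    "Interesting course and helpful staff",
    "Far too little time was given to complete the assignments",
]

for text in feedback:
    # polarity_scores returns neg/neu/pos proportions and a compound
    # score in [-1, 1]; positive values indicate positive sentiment.
    scores = analyzer.polarity_scores(text)
    print(f"{scores['compound']:+.2f}  {text}")
```

Aggregating such scores per course or per topic is one straightforward way to turn raw open-ended answers into the kind of indicators (e.g., "interesting courses" versus "too little time for assignments") that the abstract reports.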
