Abstract

In recent years, many students in higher education have begun to learn programming languages. In doing so, they complete a variety of programming tasks of varying degrees of complexity. Students need consistent and personalized feedback to develop their programming skills. Human markers can provide personalized feedback using traditional manual approaches to assessment, but they may provide inconsistent feedback (especially for long programming solutions), since marking the programming solutions of multiple students can represent a significant workload. While fully automated assessment systems are best at providing consistent feedback, they may not provide sufficiently personalized feedback for novice programmers. This study develops a novel semi‐automated assessment approach to improve the efficiency of human markers in the marking process and to increase the consistency of feedback (for both short and long programming solutions). It advocates the reuse of a human marker's comments for similar code snippets, defined in this study as segmented marking. New full and partial marking models are developed based on segmented marking and tested by expert markers. The findings show that the two models are similar in efficiency, but that the partial marking approach potentially offers improved efficiency for longer programming solutions. This finding has significant potential to reduce time spent on marking across the sector, with an impact on both resourcing and the timeliness of feedback.
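To illustrate the core idea behind segmented marking, the following is a minimal sketch, not the authors' implementation: it assumes a simple text-similarity matcher, a hypothetical threshold, and invented helper names (`suggest_comment`, `record_marking`) purely for exposition.

```python
# Hypothetical sketch of segmented marking: reuse a human marker's earlier
# comment when a new code snippet is sufficiently similar to one already
# marked. The matcher, threshold, and data shapes are illustrative
# assumptions, not the approach described in the paper.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.9  # assumed cut-off for "similar enough"

marked_snippets = []  # list of (snippet_text, marker_comment) pairs

def record_marking(snippet: str, comment: str) -> None:
    """Store a human marker's comment so it can be reused later."""
    marked_snippets.append((snippet, comment))

def suggest_comment(snippet: str):
    """Return the comment attached to the most similar already-marked
    snippet, or None if no stored snippet is close enough."""
    best_ratio, best_comment = 0.0, None
    for seen, comment in marked_snippets:
        ratio = SequenceMatcher(None, snippet, seen).ratio()
        if ratio > best_ratio:
            best_ratio, best_comment = ratio, comment
    return best_comment if best_ratio >= SIMILARITY_THRESHOLD else None
```

Under this sketch, a marker's comment on one student's loop, say, would automatically be offered when a near-identical loop appears in another submission, which is the mechanism by which segmented marking aims to keep feedback consistent while reducing the marker's workload.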
