Abstract

It is now common to use automated systems for assessing students' computer programming exercises. Many existing systems determine the correctness of a program by matching its output strings against ones pre-defined by the instructor. As a result, even when a student's program would be accepted as correct if marked by a human assessor, it may be rejected by existing automated assessment systems due to minor non-conformance of the program output. This technical limitation of existing systems is a frequent source of student complaints and a frustrating learning experience. Common patches to these systems, such as simple pre-processing of the output strings before matching, are not satisfactory solutions. Recently, a token pattern approach has been proposed as a better solution, comparing output tokens instead of raw characters. In this paper, we report our work on enhancing an existing automated program assessment system at our university by integrating it with the token pattern approach. Our preliminary evaluation shows that the enhanced system improves on the present state in that (1) it achieves progress towards more flexible assessment, in a way closer to what a human assessor would normally do, and (2) more programming exercises are now assessable by the enhanced system with much reduced effort.
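To illustrate the contrast between strict string matching and a token-oriented comparison, the following Python sketch checks whether the instructor-specified tokens appear, in order, in a student's output, ignoring spacing, case, and surrounding labels. This is only a minimal illustration of the general idea under assumed tokenisation and matching rules; the function names and policies here are not the actual algorithm of the system described in the paper.

```python
import re


def tokenize(output: str) -> list[str]:
    """Split program output into whitespace-delimited tokens,
    lower-cased and stripped of surrounding punctuation
    (an illustrative choice, not the paper's rule set)."""
    return [tok.lower().strip(".,:;")
            for tok in re.split(r"\s+", output.strip()) if tok]


def exact_match(student_output: str, expected_output: str) -> bool:
    """Traditional approach: outputs must match character for character."""
    return student_output == expected_output


def token_match(student_output: str, expected_tokens: list[str]) -> bool:
    """Token-based approach: accept the output if the expected tokens
    occur in order, regardless of extra wording or formatting."""
    produced = tokenize(student_output)
    i = 0
    for tok in produced:
        if i < len(expected_tokens) and tok == expected_tokens[i]:
            i += 1
    return i == len(expected_tokens)


# A human assessor would accept all of these answers for "sum 42".
expected = ["sum", "42"]
print(exact_match("Sum = 42", "sum: 42"))       # False: rejected by string matching
print(token_match("Sum = 42", expected))        # True
print(token_match("The sum is 42.", expected))  # True
```

In this sketch, the tokenizer and the in-order matching policy stand in for whatever token patterns an instructor would actually specify; the point is only that minor differences in spacing, case, or labelling no longer cause a correct answer to be rejected.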
