Abstract

Automated Programming Assessment Systems (APAS) are used to overcome problems associated with manually managed programming assignments, such as objective and efficient assessment in large classes and the provision of timely and helpful feedback. In this paper we survey the literature and software in this field and identify the set of features that make an APAS comprehensive, i.e., able to support all key stages of the assessment process. Put differently, a comprehensive APAS is generic enough to meet the demands of “any” computer science course. Despite the vast number of publications, the choice of software turns out to be very limited. We contribute by developing Edgar, a comprehensive open-source APAS which, to the best of our knowledge, exceeds any other similar free and/or open-source tool. Edgar is the result of three years of development and usage in, so far, eight courses covering various programming languages and paradigms (C, Java, SQL, etc.). Edgar supports various text-based programming languages and multi-correct multiple-choice questions, provides a rich exam logging and monitoring infrastructure to deter fraudulent behaviour, and enables subsequent data analysis and visualization of students' scores, exams, question quality, etc. It can be deployed on all major operating systems and is written in a modular fashion so that it can be adjusted and scaled to fit custom needs. We comment on the architecture and present data from real-world use cases to support these claims. Edgar is in active use today (1000+ students per semester) and is constantly being extended with new features.
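To make the assessment mechanism concrete, the sketch below illustrates output-based dynamic testing, the core technique behind systems of this kind: compile the submission, run it against predefined test cases, and score by matching outputs. This is a minimal illustrative Python harness for C submissions, assuming gcc is available; the file name student_solution.c, the test data, and the scoring scheme are hypothetical, and this is not Edgar's actual implementation.

    import os
    import subprocess
    import tempfile

    # Hypothetical test cases: (stdin fed to the program, expected stdout).
    TEST_CASES = [
        ("2 3\n", "5"),
        ("10 -4\n", "6"),
        ("0 0\n", "0"),
    ]

    def grade_c_submission(source_path: str) -> float:
        """Compile a C submission and return the fraction of test cases it passes."""
        with tempfile.TemporaryDirectory() as workdir:
            binary = os.path.join(workdir, "solution")
            # A submission that fails to compile scores zero.
            build = subprocess.run(["gcc", source_path, "-o", binary],
                                   capture_output=True, text=True)
            if build.returncode != 0:
                return 0.0
            passed = 0
            for stdin_data, expected in TEST_CASES:
                try:
                    # Time-limit each run so infinite loops cannot stall grading.
                    result = subprocess.run([binary], input=stdin_data, timeout=2,
                                            capture_output=True, text=True)
                except subprocess.TimeoutExpired:
                    continue
                if result.returncode == 0 and result.stdout.strip() == expected:
                    passed += 1
            return passed / len(TEST_CASES)

    if __name__ == "__main__":
        print(f"Score: {grade_c_submission('student_solution.c'):.0%}")

A production APAS would additionally sandbox execution and limit memory and processes; features such as the exam logging and monitoring mentioned above would typically sit on top of such a run-and-compare loop.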

Highlights

  • It’s been nearly 60 years since Hollingsworth [1] reported the first use of an automated program for code testing, but the reality is that programming assignments are still managed manually in most classrooms [2]

  • That is why we suggest that Automated Programming Assessment Systems (APAS) must have built-in support for question versioning and a review process

  • Systems supporting multiple languages within multiple paradigms (ACT Programming Tutor, Ceilidh, Checkpoint, CourseMarker/CourseMaster, GAME (2, 2+), Marmoset, Pex4Fun, Submit!, Testovid/Svetovid, Moodle*, WebAssign, Web-CAT) were considered. This initial list was expanded with JACK, a multi-paradigm system that can cope with object-oriented languages (Java, C++) and markup languages (EPML/XML)



Introduction

It’s been nearly 60 years since Hollingsworth [1] reported the first use of an automated program for code testing, but the reality is that programming assignments are still managed manually in most classrooms [2]. Teachers assess submitted code by compiling and testing it, or by just visually scanning the solutions. The fact that a single problem can be solved with different algorithms, and the same algorithm can be implemented in a number of different ways, burdens the grading process when it is done manually. Well-known problems of the manual evaluation of programming assignments are the objectivity and consistency of the criteria, as well as the quality and timeliness of the feedback received by the student. The lack of feedback can discourage students if they often fail and do not receive assistance to improve [3]. The sketch below illustrates this implementation variance and why output-based automated testing sidesteps it.
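As a concrete illustration (a hypothetical Python example, not taken from the paper): two structurally different solutions to the same task pass the identical test suite, since output-based grading compares only observable behaviour rather than the code itself.

    def sum_of_digits_iterative(n: int) -> int:
        """Iterative solution: peel off digits with modulo arithmetic."""
        n = abs(n)
        total = 0
        while n > 0:
            total += n % 10
            n //= 10
        return total

    def sum_of_digits_recursive(n: int) -> int:
        """Recursive solution to the same problem, structurally different."""
        n = abs(n)
        return 0 if n == 0 else n % 10 + sum_of_digits_recursive(n // 10)

    # An output-based grader only checks input/output pairs, so both
    # implementations earn the same score despite different internals.
    TESTS = [(0, 0), (7, 7), (1234, 10), (-905, 14)]

    for solution in (sum_of_digits_iterative, sum_of_digits_recursive):
        assert all(solution(arg) == expected for arg, expected in TESTS)
        print(f"{solution.__name__}: all tests passed")

A human grader, by contrast, must read and judge each distinct implementation on its own, which is exactly where objectivity and consistency break down at scale.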
