Abstract

Computer science instructors need to support the rapid improvement of novice programmers through teaching, self-guided learning, and assessment. Appropriate feedback, both generic and personalised, is essential to facilitate student progress. Automated feedback tools can also accelerate the marking process, allowing instructors to dedicate more time to other forms of tuition and students to progress more rapidly. Massive Open Online Courses rely on automated tools for both self-guided learning and assessment. Fault localisation takes up a significant part of debugging time, and popular spectrum-based methods do not narrow the potential fault locations sufficiently to assist novices. We therefore use a fast and precise model-based fault localisation method and show how it can be used to improve self-guided learning and accelerate assessment. We apply it to a large selection of actual student coursework submissions, providing more precise localisation within a sub-second response time. We demonstrate this both on the small test suites already provided in the coursework management system and on expanded test suites, showing that the approach scales. We also show that compliance with test suites does not predictably score a class of "almost correct" submissions, which our tool highlights.
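For context, spectrum-based fault localisation ranks program statements by a suspiciousness formula computed from test coverage, such as Ochiai. The minimal sketch below (toy coverage data and hypothetical statement labels, not from the paper) illustrates why such rankings often leave several statements with similar scores, which limits how far they narrow the fault location for a novice:

```python
import math

def ochiai(covered_by_failed, covered_by_passed, total_failed):
    """Ochiai suspiciousness for one statement.

    covered_by_failed: number of failing tests that execute the statement
    covered_by_passed: number of passing tests that execute the statement
    total_failed: total number of failing tests in the suite
    """
    denom = math.sqrt(total_failed * (covered_by_failed + covered_by_passed))
    return covered_by_failed / denom if denom else 0.0

# Toy coverage spectrum: statement -> (failing-test hits, passing-test hits)
spectrum = {
    "s1": (2, 3),  # executed by failing and passing tests
    "s2": (2, 0),  # executed only by failing tests
    "s3": (0, 5),  # executed only by passing tests
}
total_failed = 2

# Rank statements from most to least suspicious
ranking = sorted(
    ((stmt, ochiai(f, p, total_failed)) for stmt, (f, p) in spectrum.items()),
    key=lambda pair: -pair[1],
)
print(ranking)
```

With small test suites, many statements share coverage patterns and therefore tie in the ranking; model-based methods instead reason about which statements could explain the failures, yielding a narrower candidate set.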
