Abstract
Automated marking of student programming assignments has long been a goal of IT educators. Much of this work has focused on the correctness of small student programs, and only limited attention has been given to systematic assessment of the effectiveness of student testing. In this work, we introduce SAM (the Seeded Auto Marker), a system for automated assessment of student submissions that assesses both the program code and the unit tests supplied by students. Our central contribution is the use of programs seeded with specific bugs to analyse the effectiveness of the students' unit tests. Beginning with our intended solution program, and guided by our own set of unit tests, we create a suite of minor variations of the solution, each seeded with a single error. Ideally, a student's unit tests should not only identify the presence of the bug, but should do so via the failure of as few tests as possible, indicating focused test cases with minimal redundancy. We describe our system and the creation of the seeded test programs, and report our experiences of using the approach in practice. In particular, we find that students often fail to provide appropriate coverage, and that their tests frequently suffer from a poor understanding of the limitations imposed by the abstraction.
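To illustrate the core idea of seeded marking, the following is a minimal sketch (not SAM itself): a student's test module is run against each seeded variant of the solution, and the number of failing tests is recorded. The module names (`solution`, `solution_bug01`, `student_tests`) and the convention that the tests import the unit under test as `solution` are hypothetical assumptions for this example.

```python
# Minimal sketch of seeded marking: run the student's unit tests against each
# bug-seeded variant of the solution and count how many tests fail.
# Module names below are hypothetical placeholders, not part of SAM.
import importlib
import sys
import unittest

SEEDED_VARIANTS = ["solution_bug01", "solution_bug02", "solution_bug03"]


def failures_against(variant_name, test_module_name="student_tests"):
    """Run the student's tests with `variant_name` standing in for the solution."""
    variant = importlib.import_module(variant_name)
    # Assumption: the student's tests do `import solution`, so we substitute
    # the seeded variant before (re)loading the test module.
    sys.modules["solution"] = variant
    tests = importlib.reload(importlib.import_module(test_module_name))
    suite = unittest.defaultTestLoader.loadTestsFromModule(tests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return len(result.failures) + len(result.errors)


for name in SEEDED_VARIANTS:
    n = failures_against(name)
    status = "detected" if n > 0 else "MISSED"
    # Ideally each seeded bug is caught, and by a small, focused set of tests.
    print(f"{name}: {status} ({n} failing tests)")
```

In this scheme, a missed variant indicates a coverage gap, while a variant that triggers many failing tests suggests redundant, unfocused test cases.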