Abstract

Automatic assessment of programming assignments is often premised on black-box testing: grading of student submissions typically relies on functional specifications expressed in terms of expected outputs for given test inputs. Many upper-level courses, however, are centered on concepts that relate to how programs are implemented. In a course that teaches functional programming, for instance, the assignments should require that students use functional programming techniques, even when the language also supports imperative solutions. When students are required to use certain programming language constructs, algorithms, or design strategies as they implement their programs, a different approach to automated assessment is needed. Our strategy is centered on programming assignments designed so that the internals of the implementation can be evaluated through automated testing. A challenge of designing such auto-graded assignments is that both the specifications and the grading tests are considerably more complex than plain functional specifications and black-box tests. The specification of a homework assignment must state precise requirements without prescribing a particular solution or impairing student creativity. Furthermore, test cases should not inadvertently rely on implementation details that were not specified, yet must be able to detect forbidden algorithms or language features. The benefits and difficulties of our approach are discussed in this work.
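
As an illustration of the kind of grading test the abstract describes, the sketch below is not taken from the paper: it assumes a hypothetical Python-based course and uses Python's standard ast module to flag imperative constructs in a student submission. The forbidden-construct list and all function names are illustrative assumptions.

```python
import ast

# Minimal sketch (not from the paper): constructs that a functional-programming
# assignment might forbid in student submissions. The node list is illustrative.
FORBIDDEN_NODES = {
    ast.For: "for loop",
    ast.While: "while loop",
    ast.AugAssign: "augmented assignment (e.g. acc += x)",
}

def find_forbidden_constructs(source: str) -> list[str]:
    """Walk the submission's syntax tree and report any forbidden constructs."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        for node_type, description in FORBIDDEN_NODES.items():
            if isinstance(node, node_type):
                violations.append(f"line {node.lineno}: {description}")
    return violations

if __name__ == "__main__":
    # Hypothetical submission written in an imperative style.
    submission = '''
def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc
'''
    for violation in find_forbidden_constructs(submission):
        print(violation)
```

A check of this kind inspects only the syntax tree, so it can detect disallowed language features without relying on unspecified implementation details of a particular correct solution.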
