Abstract

Automatic assessment of programming exercises is typically based on a testing approach. Most automatic assessment frameworks execute tests and evaluate the test results automatically, but the test data generation is not automated, even though automatic test data generation techniques and tools are available. We have researched how the Java PathFinder software model checker can be adapted to the specific needs of test data generation in automatic assessment. The practical problems considered are how to derive test data directly from students' programs (i.e., without annotating them) and how to visualize and abstract the test data automatically for students. Interesting outcomes of our research are that, with minor refinements, generalized symbolic execution with lazy initialization (a test data generation algorithm implemented in PathFinder) can be used to construct test data directly from students' programs without annotation, and that intermediate results of the same algorithm can be used to provide novel visualizations of the test data.
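
As a rough illustration of the lazy initialization rule mentioned above (a sketch of the general idea, not the actual Java PathFinder implementation), the first time symbolic execution reads an uninitialized reference field, the field is nondeterministically set to null, to a fresh symbolic object, or to any previously materialized object of a compatible type. The names below (Node, LazyInitSketch, initNext, choose) are illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Node with a symbolic integer field and a lazily initialized reference field.
    class Node {
        int value;
        Node next;
    }

    class LazyInitSketch {
        // Objects materialized so far along the current execution path.
        private final List<Node> materialized = new ArrayList<>();
        private final Random random = new Random();

        // Invoked the first time symbolic execution reads node.next:
        // the field is nondeterministically set to null, to a fresh object,
        // or to any previously materialized object (possible aliasing).
        Node initNext(Node node) {
            int choice = choose(materialized.size() + 2);
            if (choice == 0) {
                node.next = null;
            } else if (choice == 1) {
                Node fresh = new Node();
                materialized.add(fresh);
                node.next = fresh;
            } else {
                node.next = materialized.get(choice - 2);
            }
            return node.next;
        }

        // Stand-in for a backtrackable choice point; in Java PathFinder this
        // role is played by Verify.getInt, which the model checker explores
        // exhaustively. A plain random pick samples only one path.
        private int choose(int options) {
            return random.nextInt(options);
        }
    }

Backtracking over every choice is what lets the algorithm enumerate all heap shapes (e.g., all linked lists up to a bound) as test inputs.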

Highlights

  • Automated verification techniques are applied in numerous assessment systems widely used in computer science (CS) education (e.g., ACE (Salmela and Tarhio, 2004), TRAKLA2 (Korhonen et al., 2003), and PILOT (Bridgeman et al., 2000)) – especially in systems used for automatic assessment of programming exercises (e.g., ASSYST (Jackson and Usher, 1997), Ceilidh (Benford et al., 1993), JEWL (English, 2004), and SchemeRobo (Saikkonen et al., 2001))

  • Automatic assessment of programming exercises is typically based on a testing approach and seldom on deducing the functional behavior directly from the source code (such as the static analysis in Truong et al., 2004)

  • Sequences vs. state exploration: in unit testing of Java programs, the test input consists of two parts: 1) the explicit arguments of the method and 2) the current state of the object (see the sketch after this list)
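
To make the two parts of a unit-test input concrete, here is a small JUnit-style test for a hypothetical SortedList class (invented for illustration, not taken from the paper): the input consists of both the object state built up by a sequence of earlier calls and the explicit argument of the call under test.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.junit.Test;

    // Minimal illustrative class; not taken from the paper.
    class SortedList {
        private final List<Integer> items = new ArrayList<>();

        void insert(int value) {
            items.add(value);
            Collections.sort(items);
        }

        @Override
        public String toString() {
            return items.toString();
        }
    }

    public class SortedListTest {

        @Test
        public void insertIntoNonEmptyList() {
            // Part 2 of the test input: the current state of the object,
            // built here by a sequence of method calls.
            SortedList list = new SortedList();
            list.insert(1);
            list.insert(5);

            // Part 1 of the test input: the explicit argument of the call under test.
            list.insert(3);

            assertEquals("[1, 3, 5]", list.toString());
        }
    }

A call-sequence approach generates the setup calls explicitly, whereas state exploration (as in the symbolic execution sketch above) constructs the object state directly.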

Introduction

Typical examples where automated verification techniques are applied are the numerous assessment systems widely used in computer science (CS) education (e.g., ACE (Salmela and Tarhio, 2004), TRAKLA2 (Korhonen et al., 2003), and PILOT (Bridgeman et al., 2000)) – especially systems used for automatic assessment of programming exercises (e.g., ASSYST (Jackson and Usher, 1997), Ceilidh (Benford et al., 1993), JEWL (English, 2004), and SchemeRobo (Saikkonen et al., 2001)). Automatic assessment of programming exercises is typically based on a testing approach and seldom on deducing the functional behavior directly from the source code (such as the static analysis in Truong et al., 2004), although it is also possible to verify features that are not directly related to functionality. In the remainder of this paper, the term automatic assessment refers to test-driven assessment of programming exercises.
