Abstract

Teaching programming is becoming increasingly popular. Students attending introductory programming courses can partly be assessed by presenting them with simple source code fragments and asking: what is the output of the given code? When preparing such code segments, teachers should be aware of the complexity ('weight') of the code. In particular, when preparing many versions of the same test (to assess a large number of students), they should try to give all students tasks of the same or similar complexity. A possible solution to this problem is the automatic generation of questions containing source code segments. To achieve consistent complexity in the automatic production of questions for programming courses, there must be a way to automatically measure the complexity of source code. In our previous work, we defined a source code metric that considers complexity from the perspective of the effort a student needs to manually trace the program's output when the input is known. The metric measures complexity using user-specified weight values assigned to each of the operators and branch statements in the code. In this paper we present a new tool that helps improve the accuracy of code complexity calculation for automatically generated tasks containing source code. We also describe preliminary findings from research conducted with this tool to determine appropriate weight values, and give remarks for future experiments on the subject.
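To illustrate the kind of metric the abstract describes, the following is a minimal sketch, assuming the complexity is computed as a weighted sum over the operators and branch statements appearing in a code fragment. The weight values and node selection here are hypothetical placeholders, not the calibrated weights from the paper.

```python
import ast

# Hypothetical weight table: the paper's actual operator set, branch
# statement set, and calibrated weight values are not given in the abstract.
WEIGHTS = {
    ast.Add: 1.0, ast.Mult: 1.5, ast.Mod: 2.0,   # arithmetic operators
    ast.If: 2.0, ast.For: 3.0, ast.While: 3.5,   # branch/loop statements
}

def weighted_complexity(source: str, weights=WEIGHTS) -> float:
    """Sum the user-specified weights of every operator and
    branch statement found in the source fragment."""
    total = 0.0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BinOp):
            # Binary operations carry the operator in node.op
            total += weights.get(type(node.op), 0.0)
        else:
            total += weights.get(type(node), 0.0)
    return total

# Example: two fragments a teacher might compare for similar 'weight'
print(weighted_complexity("x = 2 + 3 * 4"))                      # 1.0 + 1.5 = 2.5
print(weighted_complexity("for i in range(3):\n    x = i % 2"))  # 3.0 + 2.0 = 5.0
```

With such a scheme, an automatic question generator could reject or regenerate candidate fragments whose computed weight deviates from a target value, keeping task difficulty similar across test versions.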
