Abstract

Programming is a complex and challenging subject to teach and learn. A strategy that has consistently delivered results is intensive and continual practice. However, this strategy places an extra workload on teachers, who must evaluate large numbers of programming assignments fairly and in a timely manner. Furthermore, under the current COVID-19 distance-teaching circumstances, regular assessment is a fundamental feedback mechanism: it keeps students engaged in learning and determines the extent to which they have reached the expected learning goals in this new learning reality. Automating the assessment process would therefore be particularly valuable to instructors and highly beneficial to students. The purpose of this paper is to investigate the feasibility of automatic assessment in the context of computer programming courses. To this end, a prototype that merges static and dynamic analysis was developed. An empirical evaluation of the proposed grading tool in an introductory C-language course is presented, and its marks are compared to manually assigned ones. The outcomes of the comparative analysis demonstrate the reliability of the proposed automatic assessment prototype.
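To make the idea of merging static and dynamic analysis concrete, the sketch below shows one minimal way such a grader could be structured: a static step that compiles the submission with warnings enabled, and a dynamic step that runs the resulting program against a reference test case. The file names, the single test case, and the 30/70 weighting are hypothetical illustrations, not the paper's actual prototype.

```c
/*
 * Illustrative sketch only: combines a static check (compiler
 * diagnostics) with a dynamic check (output comparison on one test
 * case). File names and weights are assumptions for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Static step: compile with warnings; non-zero status means the
 * submission does not build at all. */
static int static_check(const char *source)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd,
             "gcc -Wall -Wextra -o student_prog %s 2> diagnostics.txt",
             source);
    return system(cmd) == 0;   /* 1 = compiled cleanly enough to run */
}

/* Dynamic step: run the compiled program on a test input and compare
 * its output with the expected output, line by line. Returns the
 * fraction of expected lines that matched. */
static double dynamic_check(const char *input, const char *expected)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "./student_prog < %s > actual.txt", input);
    if (system(cmd) != 0)
        return 0.0;

    FILE *got = fopen("actual.txt", "r");
    FILE *exp = fopen(expected, "r");
    if (!got || !exp)
        return 0.0;

    char g[256], e[256];
    int total = 0, matched = 0;
    while (fgets(e, sizeof e, exp)) {
        total++;
        if (fgets(g, sizeof g, got) && strcmp(g, e) == 0)
            matched++;
    }
    fclose(got);
    fclose(exp);
    return total ? (double)matched / total : 0.0;
}

int main(void)
{
    /* Hypothetical weighting: 30% for a successful build,
     * 70% for functional correctness on the test case. */
    double mark = 0.0;
    if (static_check("submission.c")) {
        mark += 30.0;
        mark += 70.0 * dynamic_check("test1.in", "test1.expected");
    }
    printf("Automatic mark: %.1f / 100\n", mark);
    return 0;
}
```

A real grading pipeline would typically add more test cases, sandbox the student program, and weigh style or warning counts separately; the point here is only how a static and a dynamic component can be composed into one mark.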
