Abstract

Automatically assessing code for learning purposes is a challenging goal to achieve. On-site courses and online courses developed for distance learning both require automated ways to grade learners' programs in order to scale and serve a large audience with a limited teaching staff. This paper reviews recent automated code assessment systems. It proposes a systematic review of the analyses these systems can perform and the associated techniques, the kinds of feedback they produce and the ways they are integrated into the learning process. It then discusses the key challenges for the development of new automated code assessment systems and their interaction with human grading. In conclusion, the paper draws several recommendations for new research directions and possible improvements to automatic code assessment.

Highlights

  • Nowadays, computer-science-related courses are delivered in many kinds of training programmes

  • Eighteen references, published between 2005 and 2021 and covering several aspects of techniques and tools to automatically assess code, have been identified [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18]. The review of these papers made it possible to highlight six main concerns related to the development of automated code assessment systems: (1) the kinds of coding/program aspects that should be assessed; (2) the methods and techniques used to analyse the code; (3) the types of feedback that are presented to learners and instructors; (4) the kinds of systems developed to support automated code assessment; (5) the ways they are integrated into the learning process and used in education; and (6) the quality and impact of the automatically produced assessments.

  • Since automated code assessment systems are usually used for summative assessment, it is important for instructors to be able to obtain a mark for each submission (a minimal grading sketch is given after these highlights).

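To illustrate the last two highlights, the following Python snippet is a minimal, hypothetical sketch of a dynamic, test-based grader; the names (TestCase, grade, learner_abs) are illustrative and not drawn from the reviewed tools. It runs a learner submission against reference test cases and aggregates the results into a mark and a simple status message, two of the feedback kinds covered by the review.

    # Hypothetical, minimal sketch of a dynamic (test-based) grader: it is not
    # taken from any of the reviewed systems, but illustrates how a submission
    # can be turned into a mark and a simple "status" feedback message.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class TestCase:
        args: Tuple          # inputs fed to the learner's function
        expected: object     # reference output

    def grade(submission: Callable, tests: List[TestCase]) -> Tuple[float, str]:
        """Run the submission against every test case and return (mark, status)."""
        passed = 0
        for test in tests:
            try:
                if submission(*test.args) == test.expected:
                    passed += 1
            except Exception:
                # A crashing submission simply fails the test case.
                pass
        mark = 100.0 * passed / len(tests)
        status = "accepted" if passed == len(tests) else f"{passed}/{len(tests)} tests passed"
        return mark, status

    # Example: grading a (buggy) learner implementation of absolute value.
    def learner_abs(x):
        return x if x > 0 else x  # bug: negative inputs are not negated

    tests = [TestCase((3,), 3), TestCase((-4,), 4), TestCase((0,), 0)]
    print(grade(learner_abs, tests))  # -> (66.66..., '2/3 tests passed')
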

Summary

Introduction

Computer-science-related courses are delivered in many kinds of training programmes. Programming courses are taught to very large audiences, from pupils in primary and secondary schools to young adults in higher education, including people following lifelong learning programmes. Instructors facing these rapidly growing audiences, and the massive amount of learner-produced code to grade that comes with them, are struggling with human resource issues. This has resulted in the development of semi- or fully automated tools to assist them with code assessment.

Motivations
Research Questions
Methodology
Related Work
Automated Code Assessment
Code and Program Aspects
Code Performance
Code Quality
Other Aspects
Methods and Techniques
Static Approaches
Dynamic Approaches
Hybrid Approaches
Modelling
Artificial Intelligence
Feedback
Status
Rubric
Counterexample
Comment
Report
Other Kinds of Feedback
Automated Assessment Tools
Features
Tools and Systems
C
Java
Agnostic
Security
Integration in the Learning Process
Grading
Active Learning
Learning Behaviour
Cheating and Creativity
Collaboration and Interoperability
Conclusions