Abstract
This article explores the suitability of static analysis techniques based on the abstract syntax tree (AST) for the automated assessment of early- and mid-degree-level programming. The focus is on fairness, timeliness and consistency of grades and feedback. Following an investigation into manual marking practices, including a survey of markers, the assessment of 97 student Java programming submissions is automated using static analysis rules. Initially, no correlation is found between human-provided marks and rule violations. This paper investigates why, and considers several improvements to the approaches used for applying static analysis rules. New methods of application are explored and the resulting technique is applied to a second exercise with 95 submissions. The results show a stronger positive correlation with manual assessment, whilst retaining advantages in time cost, pedagogic value and instant feedback. This study provides insight into the differences between human assessment and static analysis approaches, and highlights several potential pitfalls of simplistic implementations. Finally, this paper concludes that static analysis approaches are appropriate for automated assessment; however, they should be used with care.
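To make the idea of AST-based static analysis rules concrete, the sketch below shows the general shape of such a checker. It is purely illustrative and not the paper's implementation: the paper assesses Java submissions, whereas this sketch uses Python's standard-library `ast` module for a self-contained example, and the rule names and threshold are hypothetical.

```python
# Illustrative sketch of AST-based rule checking (not the paper's system).
# Walks a parsed syntax tree and records violations of simple style rules;
# the rules and the length threshold here are invented for demonstration.
import ast

MAX_FUNCTION_LENGTH = 15  # hypothetical threshold for a "function too long" rule


def check_rules(source: str) -> list[str]:
    """Return a list of rule-violation messages for one submission's source."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LENGTH:
                violations.append(
                    f"{node.name}: longer than {MAX_FUNCTION_LENGTH} lines"
                )
            if ast.get_docstring(node) is None:
                violations.append(f"{node.name}: missing docstring")
    return violations


sample = "def f(x):\n    return x + 1\n"
print(check_rules(sample))
```

A grading pipeline of this kind would run such checks over every submission and map the violation count (or a weighted score) to a mark; the paper's finding is that how rules are selected and weighted strongly affects whether the result correlates with human marking.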