Abstract

Ninas Pro, which runs C++ workshops for beginner programmers, uses automated feedback tools to give students quick feedback on their code in class. However, with the tools currently in use, many students do not complete the exercises because they do not know how to fix their code after receiving a 'wrong answer' message. Binary feedback of this kind has been shown in the literature to have negative effects on student engagement, and tutors end up repeating the same feedback to many students for the same exercise. In this work, we built a tool that groups incorrect solutions by the output they produce, helping tutors identify common programming and logic errors in student code during class. Tutors annotate incorrect outputs with suggestions, which are then shown to every student whose code produces the same wrong output. We carried out an exploratory case study to validate our approach, in which students were able to fix both logical and presentation errors using the suggestions provided by our tool. Not all errors can be annotated; those must still be reviewed in person by a tutor. We also carried out a usability study in which tutors successfully annotated solution groups. The main contribution of this work is therefore a tool that supports the work tutors do during programming classes, letting them give richer feedback in a more scalable manner.
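
To make the grouping idea concrete, here is a minimal, hypothetical C++17 sketch (not taken from the paper): incorrect submissions are bucketed by the exact output they produce, and a tutor annotation keyed to a wrong output is surfaced to every submission in that bucket. All names, outputs, and exercise data below are invented for illustration.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// One student submission: who wrote it and what it printed
// when run against the exercise's test input.
struct Submission {
    std::string student;
    std::string output;
};

int main() {
    const std::string expected = "sum = 10\n";

    std::vector<Submission> submissions = {
        {"ana",  "sum = 10\n"},
        {"luis", "10\n"},       // presentation error: missing label
        {"eva",  "10\n"},
        {"sol",  "sum = 9\n"},  // logic error: off by one
    };

    // Tutor annotations, keyed by the wrong output they explain.
    std::map<std::string, std::string> annotations = {
        {"10\n",       "Check the exact format: print the 'sum = ' label."},
        {"sum = 9\n",  "Does your loop include the last element?"},
    };

    // Group incorrect submissions by the output they produced.
    std::map<std::string, std::vector<std::string>> groups;
    for (const auto& s : submissions) {
        if (s.output != expected)
            groups[s.output].push_back(s.student);
    }

    // Every submission in an annotated group receives the same
    // suggestion; unannotated groups still need in-person review.
    for (const auto& [output, students] : groups) {
        std::cout << students.size() << " student(s) printed: " << output;
        auto it = annotations.find(output);
        if (it != annotations.end())
            std::cout << "  suggestion: " << it->second << "\n";
        else
            std::cout << "  (no annotation yet; needs tutor review)\n";
    }
}

Under these assumptions, annotating one representative wrong output once is enough to reach every student in the same group, which is the scalability gain the abstract describes.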
