Abstract
Computer-Aided Assessments (CAAs) have been used increasingly at Brunel University for over 10 years to test students' mathematical abilities. Recently, we have focussed on providing very rich feedback to the students; given the work involved in designing and coding such feedback, it is important to study the impact of students' interaction with that feedback. To make feedback more focussed, examination scripts have been analysed to identify common student errors and misconceptions. These have then been used to code distracters in multiple-choice and responsive numerical-input questions. Since random parameters are used in all questions developed, distracters have to be coded as algebraic or algorithmic mal-rules. This paper reports on the methodology used to identify students' errors and misconceptions and how the evidence collected was used to code the distracters. The paper also provides hard evidence that real learning has taken place while students have interacted with the CAAs. Statistical analyses of exam performance over eight years indicate that students are able to improve their performance in subsequent formative and summative assessments, provided that they have truly engaged with the CAA, especially by spending time studying the feedback provided.
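To illustrate the idea of mal-rule distracters with randomised parameters, the sketch below shows one possible way such a question might be generated. The function name, question type (a simple power-rule derivative) and the particular mal-rules are illustrative assumptions, not taken from the paper; the point is only that each distracter is computed from the same random parameters as the correct answer, so it remains a plausible wrong answer for every instance of the question.

```python
import random

def generate_power_rule_question():
    """Hypothetical sketch: d/dx(a*x^n) with random parameters a and n.

    The correct answer and each distracter are returned as (coefficient, exponent)
    pairs. Distracters encode common student errors (mal-rules) rather than fixed
    values, so they stay credible whatever values a and n take.
    """
    a = random.randint(2, 9)
    n = random.randint(2, 6)

    correct = (a * n, n - 1)  # d/dx(a*x^n) = a*n*x^(n-1)

    # Illustrative mal-rules (assumed, not from the paper):
    distracters = {
        "forgot to reduce the exponent": (a * n, n),
        "forgot to multiply by the exponent": (a, n - 1),
        "raised the exponent instead of lowering it": (a * n, n + 1),
    }

    return {
        "stem": f"Differentiate {a}x^{n} with respect to x.",
        "correct": correct,
        "distracters": distracters,
    }

if __name__ == "__main__":
    print(generate_power_rule_question())
```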