Abstract

Variation in marks awarded, alongside the quality of feedback, is an issue whenever large-scale assessment is undertaken. In particular, variation between sessional teaching staff has been studied for decades, resulting in many recorded efforts to overcome the issue. Attempts to curtail variation range from moderation meetings and extended training programmes to electronic tools, automated feedback, and even audio/video feedback. Decreased marking variation has been observed whenever automated marking was used, potentially because markers exercise less academic judgment. This article focuses on a case study of three interventions undertaken at Monash University, designed to address concerns about the variability of marking and feedback between sessional teaching staff employed in the chemistry teaching laboratories. The interventions comprised detailed marking criteria, Excel marking spreadsheets, and automatically marked Moodle reports. Results indicated that more detailed marking criteria had no effect, whereas automated processes consistently decreased variation; this was attributed to a reduction in the academic judgment markers were expected to exercise. Only the Excel spreadsheet ensured the provision of consistent feedback to students. Sessional teaching staff commented that their marking loads were reduced and that the new methods were easy to use.

