Abstract

E-assessments are becoming increasingly common and progressively more complex. Consequently, the way these longer, more complex questions are designed and marked is of critical importance. This article uses the NUMBAS e-assessment tool to investigate best practice for creating longer questions and their mark schemes on surveying modules taken by engineering students at Newcastle University. Automated marking enables follow-through marks to be calculated when incorrect answers are used in subsequent parts. However, awarding follow-through marks with no further penalty for fundamentally incorrect solutions leads to non-normally distributed marks; we therefore found that follow-through marks should be awarded at 25% or 50% of the total available to produce a normal distribution. Appropriate question design, with questions split into multiple steps, is vital to enable automated method marking in longer-style e-assessment. Longer calculation questions split into too few parts became all-or-nothing questions and produced bimodal mark distributions, whilst questions separated into too many parts gave students too much guidance, so did not adequately assess the learning outcomes and led to unnaturally high marks. To balance these factors, we found that longer questions should be split into approximately 3–4 parts, although this is application dependent.
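The follow-through marking scheme described in the abstract can be sketched in code. The function below is a minimal illustration, not the NUMBAS implementation: it assumes a multi-part question where each part's expected value can be recomputed from the student's own earlier (possibly incorrect) input, and the `follow_through_fraction` parameter (e.g. 0.25 or 0.5) is the fractional award the article recommends. All names and tolerances are hypothetical.

```python
def mark_part(student_answer, correct_answer, expected_from_student_input,
              marks_available, follow_through_fraction=0.5, tol=1e-6):
    """Mark one part of a multi-part calculation question.

    Full marks if the answer matches the true correct value; a fraction of
    the marks if it instead matches the value implied by the student's own
    earlier incorrect input (correct method, error carried forward); zero
    otherwise.
    """
    if abs(student_answer - correct_answer) < tol:
        return marks_available
    if abs(student_answer - expected_from_student_input) < tol:
        return follow_through_fraction * marks_available
    return 0.0
```

For example, if part (a) should give 10.0 but a student obtains 12.0, and part (b) doubles the result of part (a), a student answering 24.0 in part (b) has applied the correct method to their incorrect input and would receive the follow-through fraction of that part's marks rather than zero.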
