Abstract

The mass shift to Open-Book, Open-Web (OBOW) assessments during the pandemic highlighted new opportunities in Higher Education for developing accessible, authentic assessments that can reduce administrative load. Despite a plethora of research emerging on the effectiveness of OBOW assessments within disciplines, few studies evaluate their effectiveness across disciplines, where the assessment instrument can vary significantly. This paper evaluates the experiences of students across STEM subjects with OBOW exams, contributing to an evidence base for emerging post-pandemic assessment policies and strategies. In April 2021, following two cycles of OBOW exams, we surveyed STEM students across a range of subjects to determine their preparation strategies, experiences during the exam, perceptions of the development of higher-order cognitive skills, test anxiety, and how they thought these assessments might enhance employability. Overall, students from subjects whose assessment instruments require analytical, quantitative answers (Maths, Physics, Computer Science and Chemistry) adapted their existing study skills less effectively, felt less prepared, and experienced higher levels of stress than students of subjects using more qualitative, discursive answers (Biosciences and Geography). We conclude with recommendations on how to enhance the use of OBOW exams: supporting and developing more effective study skills, ensuring assessments align with intended learning outcomes, addressing academic integrity, promoting inclusivity, and encouraging authentic assessment. Based on the outcomes of this study, we strongly advise that assessment policies fostering the wholesale roll-out of OBOW assessment consider the inter-disciplinary impacts on learner development, staff training and workload resources.
