Abstract

It is increasingly important that higher education institutions can audit and evaluate the scope and efficacy of their digital learning resources across various scales. To date there has been little effort to address the need for a validated, appropriate, and simple-to-execute method to facilitate such an audit, whether at the scale of an individual programme, department, faculty, or institution. The resulting data are of increasing value for ensuring institutions maintain progress and equity in the student experience, as well as for the deployment and interpretation of learning analytics. This study presents a generalizable framework for auditing digital learning provision in higher education curricula. The framework is contextualized using a case study in which the audit is conducted across a single faculty in a research-intensive U.K. university. This work provides academics and higher education administrators with key principles and considerations, as well as example aims and outcomes.

Highlights

  • It is increasingly important that higher education institutions be able to audit and evaluate the scope and efficacy of their digital learning resources across various scales

  • This study presents a generalizable framework for auditing digital learning in higher education institutions, with the aim of providing a method that allows higher education administrators and academics to monitor and evaluate the deployment of digital learning resources and techniques

  • Neither a small number of highly interactive resources nor a large number of less interactive resources more heavily influenced overall digital learning scores (DLS). This suggests that, although less interactive resources are easier to develop and deploy, DLS did not effectively penalize investing more in one resource type than another; DLS approximately equates the contribution of resource volume (DLN) with resource interactivity (DLI).
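The balance described above can be illustrated with a small sketch. This is a hypothetical illustration only: the paper's actual scoring formulas are not given in this excerpt. The names DLN, DLI, and DLS follow the highlight above, and the equal weighting and scale caps are assumptions consistent with the stated finding.

```python
def digital_learning_score(resources):
    """Combine resource volume (DLN) and mean interactivity (DLI) into a DLS.

    `resources` is a list of per-resource interactivity ratings
    (assumed here to be on a 1-5 scale).
    """
    if not resources:
        return 0.0
    dln = len(resources)                   # volume component (DLN)
    dli = sum(resources) / len(resources)  # mean interactivity component (DLI)
    # Assumed equal-weight combination: normalize volume against a nominal
    # cap, normalize interactivity against the rating scale, then average.
    max_volume, max_interactivity = 20, 5  # assumed scale caps
    return 0.5 * min(dln / max_volume, 1.0) + 0.5 * (dli / max_interactivity)

# Under these assumptions, many low-interactivity resources and a few
# high-interactivity resources yield comparable scores.
many_simple = [1] * 20        # 20 resources, each rated 1
few_rich = [5, 5, 5, 5]       # 4 resources, each rated 5
```

With this weighting, `digital_learning_score(many_simple)` and `digital_learning_score(few_rich)` come out equal, mirroring the finding that neither resource strategy dominates the overall score.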


Introduction

It is increasingly important that higher education institutions be able to audit and evaluate the scope and efficacy of their digital learning resources across various scales. There are currently numerous smaller ways that digital technology or resources can be integrated into traditionally nondigital teaching and learning practices, such as for species identification in biology field courses (Jeno, Grytnes, & Vandvik, 2017); professional development and peer review (Collins, Cook-Cottone, Robinson, & Sullivan, 2004; Laru, Järvelä, & Clariana, 2012); music creation and evaluation using individual mobile applications (Birch, 2017); and enhancing learning through the use of interactive, responsive games (Kiili, 2005). The use of such tools and resources needs to be optimized for and appropriate to the learning context, but deployment is frequently spearheaded by enthusiasts or by localized initiatives in selected areas of a curriculum or overall learning experience. This organic and relatively unmoderated spread of digital tools and resources within the curriculum can lead to large variation within the student experience (Gilbert et al., 2007), which is important to understand and visualize if student feedback and the overall student experience are to be analyzed appropriately and developed in a constructive, strategic, and progressive manner.
