The neoliberalisation of higher education in the UK has placed greater pressure on Learning Development units to demonstrate impact. Combined with increased regulatory scrutiny from bodies such as the Office for Students in England (including the monitoring of B3 thresholds, awarding gaps, the Teaching Excellence Framework, and other measures), this makes rigorous evaluation arguably more crucial than ever. In this workshop, we offered reflections and learning from the evaluation of the four-year Personal Learning Advice (PLA) Service pilot at the Open University (UK). The PLA Service project is an Access and Participation Plan initiative that has delivered a one-to-one and group coaching and mentoring service (Clay et al., 2023) for students from disadvantaged backgrounds since January 2021 (Lochtie and Hillman, 2023). Our team has used coaching and mentoring approaches to support students’ learning and study habits, wellbeing, and help-seeking behaviours (Hillman et al., in press). In the first part of the workshop, we shared learning from the approaches used to evaluate the work of the service – from narrative and empirical evidence to causal evaluation in randomised controlled trials (TASO, 2022) – and situated this in the wider literature on evaluation in HE (Sabri, 2023). We shared findings from our work and reflected on the standards of evidence in HE (OfS, 2019). In the second part of the workshop, we invited delegates to consider how they measure and evaluate impact in their own contexts, and we explored relevant Learning Development scholarship and publications. Our presentation aligned with the ALDcon24 theme ‘Building Learning Development for the Future’ and sought to invite discussion about the ways we demonstrate ‘impact’ as Learning Developers.