Abstract

Evaluation of the outcomes of undergraduate medical education (UME) is a complex issue, not least because of the time lapse between the educational intervention (the undergraduate curriculum) and the overall outcome (successful performance as an independent medical practitioner). Numerous other variables, including postgraduate education, the clinical practice environment and personal, social and domestic circumstances, all have an impact on professional practice. However, medical educators must try to find ways of evaluating the outcomes of UME, particularly in view of the substantial academic and financial investments made in the course of initiating change over recent years. There is a small but expanding literature in which the relationship between UME and performance as a newly qualified graduate has been used to indicate the success of undergraduate programmes.

Traditionally, UME focused on providing the knowledge base thought to be essential for graduating students. Medical schools then handed on their graduates to individual hospitals for their first year of supervised clinical practice. This transition from student to practising doctor is notoriously stressful, a fact usually attributed to the failure of undergraduate schools to equip their students with the wider range of skills and attitudes necessary for independent practice. Some of the problems associated with the transitional period were highlighted recently in a paper illustrating discrepancies between stated undergraduate curricular outcomes and the competency requirements of clinical practice.1 First year residency directors were asked to state their expectations of graduates' skills and competencies at entry into their programmes and after 3 months in practice. It was clear that residency directors expected to set aside significant time and resources during the early weeks of their programmes for training in basic skills, at a level that could have been achieved during UME.
This leads to the conclusion that better communication and agreement between those who set undergraduate outcomes and residency directors would avoid wasting time in the clinical arena. Interestingly, the majority of the skills cited in this study as being required at 3 months post graduation are also identified in other documents, notably Tomorrow's Doctors,2 as being required at completion of UME. Many medical schools worldwide use outcomes set by the General Medical Council or an equivalent body to delineate the skills required at graduation, running an end-of-course objective structured clinical examination (OSCE) or similar assessment to establish student competence. Despite this, a recent Danish study showed newly qualified doctors self-reporting poor levels of competence in a range of essential clinical skills.3 These findings are congruent with our own (as yet unpublished) data, which use self-reporting and objective assessment to show that new graduates lack both confidence and competence in a range of clinical skills, despite having recently passed qualifying clinical examinations. The discrepancy between UME and early clinical performance is further highlighted by reports of only low–moderate correlations between undergraduate student grades in knowledge-based assessments and performance as first year residents.4, 5

Two papers in this issue of Medical Education shed more light on the relationship between UME and early clinical performance and examine two of the mainstays of modern UME: communication skills and OSCEs. Willis and colleagues investigated the attitudes of their graduates to communication skills and their teaching in an undergraduate course.6 They compared the last cohort of pre-registration house officers (PRHOs; equivalent to junior residents) to undertake the traditional curriculum in their medical school with the first cohort of a new course in which communication skills are emphasised.
The results suggest that teaching communication skills in UME gives graduates a better cognitive framework for dealing with communication issues, the ability to use more complex communication skills in their daily practice and an appreciation of the therapeutic potential of communication. Probert and colleagues investigated whether the results of a final year OSCE could predict performance as a PRHO more accurately than the results of a traditional long case examination.7 Pre-registration house officer performance was assessed by self- and consultant assessment. Taking into account the differences in examination format, this study showed an inverse correlation between results in the traditional paper and consultant performance ratings, whilst the OSCE showed positive associations with consultant ratings. Whilst neither of these studies is methodologically perfect, they are tantalising in their conclusions, suggesting as they do that our curricular revisions of the last few years may indeed result in demonstrable improvements in individual clinical practice as a PRHO.

So where do we go from here? It seems reasonable to conclude from the above that the transition is easier and PRHO performance enhanced in students whose undergraduate curriculum has included elements of skills-based teaching directly relevant to clinical practice at the PRHO level. The failure of traditional assessments to fully predict PRHO performance suggests that other aspects of UME, such as clinical and communication skills, personal and professional development, and individual personality factors, play important roles. Furthermore, these can be identified at both early8 and late9 stages during UME, facilitating remedial education and, it is to be hoped, a smoother transition into practice. However, there are a number of important caveats.
The aim of UME is not solely to train students to become junior residents or PRHOs, and the evidence accrued from studies comparing UME with PRHO/junior resident performance may not be indicative of the most important outcomes in the long term. A recent Australian study compared practice outcomes beyond the PRHO year for graduates of traditional and non-traditional curricula.10 The results suggested that it was the admissions policy of the non-traditional school rather than its undergraduate curriculum that determined the difference in practice outcomes.

Furthermore, we should not mistake training for education and must never lose opportunities to motivate and enthuse our students. It remains important that, alongside the core curriculum, we give students opportunities to develop their academic potential in its broadest sense. Medical students still need to be exposed to the learning opportunities associated with the rarities of clinical medicine, to observe the intricate nature of the doctor–patient relationship in many different situations, to be impressed by technological advances and to be inspired by high quality research and its application to medical practice. It is vital for the future of our profession that, in the face of continuing fiscal and political pressure, we still present medicine as the exciting and rewarding career it is. Despite all the methodological reservations, the most important outcome measure for both undergraduate and postgraduate medical education may still be an improvement in career retention rates for doctors in practice at all levels.
