Improving efficiency and effectiveness of workplace-based assessment workshop in postgraduate medical education using a conjoint design.
Faculty development for trainers and nurturing feedback literacy in trainees are crucial for effective workplace-based assessments (WBAs) to support trainee competency development. Separate training sessions for trainers and trainees can be challenging when resources are limited. Combined training can optimise resources and foster mutual understanding, although such approaches face challenges related to power dynamics. This study aimed to evaluate the effectiveness of a conjoint WBA workshop in enhancing trainer engagement and improving trainee feedback literacy, and to explore the benefits and challenges of integrating trainers and trainees in a shared learning environment. A mixed-methods study was conducted with 13 trainers and five trainees from the Hong Kong College of Otorhinolaryngologists. Quantitative data were collected using the Feedback Literacy Behaviour Scale for trainees and the Continuing Professional Development-Reaction Questionnaire for trainers. Pre- and post-intervention comparisons were analysed using paired t-tests. Qualitative data from focus group interviews were thematically analysed. Quantitative analysis showed statistically significant increases in trainee feedback literacy (P<0.001) and improvements in trainers' beliefs about their capabilities and engagement intentions (P<0.05). The qualitative analysis supported these findings and identified three key factors: mutual understanding, clarification of the WBA purpose, and effective instructional design. Participants valued the mutual understanding fostered in the conjoint setting, which aligned expectations and created a supportive learning environment. Conjoint WBA workshops may effectively promote trainer engagement and trainee feedback literacy, aligning expectations and fostering a positive feedback culture. Further research is needed to explore their longitudinal impact and applicability to other specialties.
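The pre/post comparison reported above relies on a paired t-test. As a minimal sketch of that calculation, using invented scores for five trainees (the study's raw data are not reproduced here), the statistic can be computed directly from the paired differences:

```python
# Hedged sketch of a paired t-test on hypothetical pre/post feedback-literacy
# scores; the values below are invented for illustration, not taken from the study.
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Return (t statistic, degrees of freedom) for paired samples."""
    assert len(pre) == len(post)
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)   # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Five trainees, scores before and after the workshop (hypothetical)
pre  = [3.1, 2.8, 3.4, 2.9, 3.0]
post = [4.2, 3.7, 4.5, 4.1, 3.9]
t, df = paired_t(pre, post)
```

With df = n - 1 = 4, a t value well above the two-sided 5% critical value (about 2.776) indicates a significant pre/post difference, which is the kind of result the study reports at P<0.001.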
- Research Article
- 10.1111/medu.14960
- Nov 21, 2022
- Medical Education
Competency-based medical education (CBME) led to the widespread adoption of workplace-based assessment (WBA) with the promise of achieving assessment for learning. Despite this, studies have illustrated tensions between the summative and formative roles of WBA which undermine learning goals. Models of workplace-based learning (WBL) provide insight; however, these models excluded WBA. This scoping review synthesizes the primary literature addressing the role of WBA in guiding learning in postgraduate medical education, with the goal of identifying gaps to address in future studies. The search was applied to OVID Medline, Web of Science, ERIC and CINAHL databases; articles up to September 2020 were included. Titles and abstracts were screened by two reviewers, followed by a full-text review. Two members independently extracted and analysed quantitative and qualitative data using a descriptive-analytic technique rooted in Billett's four premises of WBL. Themes were synthesized and discussed until consensus was reached. All 33 papers focused on the perception of learning through WBA. The majority applied qualitative methodology (70%), and 12 studies (36%) made explicit reference to theory. Aligning with Billett's first premise, the results reinforce that learning always occurs in the workplace. WBA helped guide learning goals and enhanced feedback frequency and specificity. Billett's remaining premises provided an important lens for understanding how tensions that existed in WBL have been exacerbated by frequent WBA. As individuals engage in both work and WBA, they are slowly transforming the workplace. Culture and context frame individual experiences and the perceived authenticity of WBA. Finally, individuals will have different goals, and learn different things, from the same experience. Analysing the WBA literature through the lens of WBL theory allows us to reframe previously described tensions. We propose that future studies attend to learning theory, and demonstrate alignment with philosophical position, to advance our understanding of assessment-for-learning in the workplace.
- Research Article
- 10.1111/medu.14221
- Jul 5, 2020
- Medical Education
Since their introduction, workplace-based assessments (WBAs) have proliferated throughout postgraduate medical education. Previous reviews have identified mixed findings regarding WBAs' effectiveness, but have not considered the importance of user-tool-context interactions. The present review was conducted to address this gap by generating a thematic overview of factors important to the acceptability, effectiveness and utility of WBAs in postgraduate medical education. This review utilised a hermeneutic cycle for analysis of the literature. Four databases were searched to identify articles pertaining to WBAs in postgraduate medical education from the United Kingdom, Canada, Australia, New Zealand, the Netherlands and Scandinavian countries. Over the course of three rounds, 30 published articles were thematically analysed in an iterative fashion to engage deeply with the literature in order to answer three scoping questions concerning acceptability, effectiveness and assessment training. As each round was coded, themes were refined and questions added until saturation was reached. Stakeholders value WBAs for permitting assessment of trainees' performance in an authentic context. Negative perceptions of WBAs stem from misuse due to low assessment literacy, disagreement with definitions and frameworks, and inadequate summative use of WBAs. Effectiveness is influenced by user attributes (eg, engagement and assessment literacy) and tool attributes (eg, definitions and scales), but most fundamentally by user-tool-context interactions, particularly trainee-assessor relationships. Assessors' assessment literacy must be combined with cultural and administrative factors in organisations and the broader medical discipline. The pivotal determinants of WBAs' effectiveness and utility are the user-tool-context interactions. From the identified themes, we present 12 lessons learned regarding users, tools and contexts to maximise WBA utility, including the separation of formative and summative WBA assessors, use of maximally useful scales, and instituting measures to reduce competing demands.
- Research Article
- 10.17240/aibuefd.2025..-1564394
- Sep 3, 2025
- Abant İzzet Baysal Üniversitesi Eğitim Fakültesi Dergisi
The study examined the effects of interactionist dynamic assessment on EFL learners' feedback literacy in writing, comparing it with traditional written corrective feedback. It also explored learners' perceptions of this assessment approach. Using a quasi-experimental pre-test and post-test design, the study collected both quantitative and qualitative data. Quantitative data were gathered through the Feedback Literacy Scale to address the first research question regarding the impact of interactionist dynamic assessment on feedback literacy. Qualitative data were obtained from interviews to address the second research question concerning learners' perceptions of interactionist dynamic assessment. The findings indicated that interactionist dynamic assessment was more effective than written corrective feedback in enhancing students' feedback literacy. Learners reported predominantly positive experiences with interactionist dynamic assessment, noting improved understanding, increased motivation, a heightened awareness of errors, and a sense of being valued through individualized learning opportunities. Consequently, EFL teachers are encouraged to incorporate interactionist dynamic assessment into their writing classes to enhance the feedback process, improve student feedback uptake, and foster better teacher-student interactions.
- Research Article
- 10.1111/medu.12081
- Feb 8, 2013
- Medical Education
Many studies have examined how educational innovations in postgraduate medical education (PGME) affect teaching and learning, but little is known about their effects in the clinical workplace outside the strictly education-related domain. Insights into the full scope of effects may facilitate the implementation and acceptance of innovations because expectations can be made more realistic, and difficulties and pitfalls anticipated. Using workplace-based assessment (WBA) as a reference case, this study aimed to determine which types of effect are perceived by users of innovations in PGME. Focusing on WBA as a recent instance of innovation in PGME, we conducted semi-structured interviews to explore perceptions of the effects of WBA in a purposive sample of Dutch trainees and (lead) consultants in surgical and non-surgical specialties. Interviews conducted in 2011 with 17 participants were analysed thematically using template analysis. To support the exploration of effects outside the domain of education, the study design was informed by theory on the diffusion of innovations. Six domains of effects of WBA were identified: sentiments (affinity with the innovation and emotions); dealing with the innovation; specialty training; teaching and learning; workload and tasks; and patient care. Users' affinity with WBA partly determined its effects on teaching and learning. Organisational support and the match between the innovation and routine practice were considered important to minimise additional workload and ensure that WBA was used for relevant rather than easily assessable training activities. Dealing with WBA stimulated attention to specialty training and placed specialty training on the agenda of clinical departments. These outcomes are in line with theoretical notions regarding innovations in general and may be helpful in the implementation of other innovations in PGME. Given the substantial effects of innovations outside the strictly education-related domain, individuals designing and implementing innovations should consider all potential effects, including those identified in this study.
- Research Article
- 10.5334/pme.1428
- Jul 22, 2025
- Perspectives on Medical Education
Programmatic Assessment provides a comprehensive picture of a learner's competence through the selection of assessment methods and the design of organisational systems [1]. This paper describes how the Irish College of GPs (ICGP) designed and implemented a new, national, workplace-based assessment (WBA) system for GP training as part of an ongoing evolution towards Programmatic Assessment, with a focus on assessment-for-learning [1]. Six overlapping workstreams over five years led to success: iterative consultation and design, entrustable professional activities, software design, stepwise implementation, separation of mentor/assessor roles, and WBA training embedded in feedback literacy and growth-mindset learning. Our design focused on collecting longitudinal, low-stakes assessments organised into core competences in a manner that supports learners. Eighteen entrustable professional activities were developed and implemented, along with a software platform designed to enter and display accumulated data. Competence committees periodically assess both qualitative and quantitative data on the learner's journey to oversee progression and make high-stakes decisions. We describe the development of the system along with aids and barriers to its adoption. Structured continuous consultation with the training community and constant reference to the educational literature were both important for success. Novel features of our system are the distancing of mentor and assessor roles, the avoidance of recommended minimum numbers of WBA entries, and consideration of the validity and reliability of the system as a whole rather than of the individual tools.
- Research Article
- 10.1186/s40064-016-1748-x
- Feb 20, 2016
- SpringerPlus
In 2010, workplace-based assessment (WBA) was formally integrated as a method of formative trainee assessment into 29 basic and higher specialist medical training (BST/HST) programmes in six postgraduate training bodies in Ireland. The aim of this study is to explore how WBA is being implemented and to examine whether WBA is being used formatively as originally intended. A retrospective cohort study was conducted and approved by the institution's Research Ethics Committee. A profile of WBA requirements was obtained from 29 training programme curricula. A data extraction tool was developed to extract anonymous data, including written feedback and timing of assessments, from Year 1 and 2 trainee ePortfolios in 2012–2013. Data were independently quality assessed and compared to the reference standard number of assessments mandated annually where relevant. All 29 training programmes mandated the inclusion of at least one case-based discussion (max = 5; range 1–5). All except two non-clinical programmes (93%) required at least two mini-Clinical Evaluation Exercise assessments per year, and Direct Observation of Procedural Skills assessments were mandated in 27 training programmes over the course of the programme. WBA data were extracted from 50% of randomly selected BST ePortfolios in four programmes (n = 142) and 70% of HST ePortfolios (n = 115) in 21 programmes registered for 2012–2013. Four programmes did not have an eligible trainee for that academic year. In total, 1142 WBAs were analysed. A total of 164 trainees (63.8%) had completed at least one WBA. The average number of WBAs completed by HST trainees was 7.75 (SD 5.8; 95% CI 6.5–8.9; range 1–34). BST trainees completed an average of 6.1 assessments (SD 9.3; 95% CI 4.01–8.19; range 1–76). Feedback, of varied length and quality, was provided on 44.9% of assessments. The majority of WBAs were completed in the second half of the year. There is significant heterogeneity in the frequency and quality of feedback provided during WBAs. The completion of WBAs later in the year may limit the time available for feedback, performance improvement and re-evaluation. This study sets the scene for further work to explore the value of formative assessment in postgraduate medical education.
- Book Chapter
- 10.58532/nbennurcihpsw21
- Oct 27, 2024
Workplace-Based Assessment (WPBA) represents a pivotal shift in medical education, emphasizing real-time evaluation of residents' clinical competence within authentic patient care environments. Unlike traditional assessment tools that focus on knowledge or controlled simulations, WPBAs aim to assess the "Does" level of Miller's pyramid: direct performance in real-world clinical settings. This model of assessment enhances educational validity by capturing the integration of knowledge, skills and attitudes required for effective patient care. WPBA tools, such as the Mini-Clinical Evaluation Exercise (Mini-CEX), Direct Observation of Procedural Skills (DOPS), Case-Based Discussions (CbDs) and Multi-Source Feedback (MSF), facilitate formative feedback, promote reflective practice and support competency-based medical education (CBME). These tools are especially valuable in fostering feedback-rich environments and tracking learner progression over time. Studies from global and Indian settings confirm WPBA's acceptability, educational impact and role in improving clinical performance. However, challenges remain. Faculty training, time constraints, assessor variability and the tension between formative and summative use affect the reliability and acceptance of WPBA. Additionally, WPBA's effectiveness depends heavily on high-quality feedback, observation-based assessment and clearly defined competencies. Entrustment-based scales and tools such as the EPA-IC in surgical specialties have enhanced the alignment of assessment with clinical expectations. To strengthen WPBA's role in medical training, systemic strategies are needed: improved faculty development, clearer assessment frameworks and integration of feedback into daily practice. Despite limitations, WPBAs represent a robust method to assess and guide the development of clinical competence, professionalism and communication in postgraduate medical education, aligning assessment closely with clinical reality.
- Research Article
- 10.47811/bhj.104
- Nov 18, 2020
- Bhutan Health Journal
Introduction: Postgraduate medical education has witnessed a transition from traditional cognitive-based to more competency-based learning globally. The Khesar Gyalpo University of Medical Sciences of Bhutan introduced competency-based medical education (CBME) through the implementation of workplace-based assessment (WPBA) in June 2018. The primary objective of this initiative was to produce specialists of the highest quality and to cultivate competency- and outcome-based, yet learner-centered, curricula. Methods: The evaluation was conducted in June 2019. Mixed data collection methods were used, including a survey, interviews, document review and a focus group discussion, to provide an understanding of local challenges and needs in the implementation of WPBA. Results: A total of 90% of the faculty members and 40% of the administrators evaluated were aware of the implementation of WPBA. The majority of faculty felt that WPBA is beneficial to both faculty and residents, and all residents felt that it is beneficial for learning. OBGYN residents were exposed to the highest number of WPBA encounters, at 20. The highest number of WPBA activities was performed by residents of the general practice department, which stood at 56. Lack of time as a hindrance to the practice of WPBA was cited by 28% of faculty and 61% of residents. Conclusions: Despite WPBA having been implemented for only a short duration, there is a high level of awareness and acceptability among both residents and faculty of WPBA as an effective teaching and learning tool.
- Research Article
- 10.1186/s12909-020-02299-8
- Oct 23, 2020
- BMC Medical Education
Background: The principle of workplace-based assessment (WBA) is to assess trainees at work, with feedback integrated into the program simultaneously. A student-driven WBA model was introduced, and a perception evaluation of this teaching method was subsequently conducted by taking feedback from the faculty as well as the postgraduate trainees (PGs) of a residency program. Methods: A descriptive multimethod study was conducted. A WBA program was designed for PGs in Chemical Pathology on Moodle; the forms utilized were case-based discussion (CBD), direct observation of practical skills (DOPS) and evaluation of clinical events (ECE). Consenting assessors and PGs were trained on WBA through a workshop. A pretest and posttest were conducted to assess PGs' knowledge before and after WBA. Every time a WBA form was filled in, the perceptions of PGs and assessors towards WBA, the time taken to conduct a single WBA, and the feedback given were recorded. Qualitative feedback from faculty and PGs on their perceptions of WBA was taken via interviews. WBA tool data and qualitative feedback were used to evaluate the acceptability and feasibility of the new tools. Results: Six eligible PGs and seventeen assessors participated in this study. A total of 79 CBDs (assessors n = 7, PGs n = 6), 12 ECEs (assessors n = 6, PGs n = 5) and 20 DOPS (assessors n = 6, PGs n = 6) were documented. The PGs' average pretest score was 55.6%, which improved to 96.4% in the posttest (p < 0.05). Scores of the annual assessment before and after implementation of WBA also showed significant improvement (p = 0.039). The overall mean time taken to evaluate PGs was 12.6 ± 9.9 min, and the feedback time was 9.2 ± 7.4 min. Mean WBA process satisfaction of assessors and PGs on a Likert scale of 1 to 10 was 8 ± 1 and 8.3 ± 0.8, respectively. Conclusion: Both assessors and fellows were satisfied with the introduction and implementation of WBA. It gave the fellows the opportunity to interact with assessors more often and learn from their rich experience. The gain in PGs' knowledge was identified from the statistically significant improvement in their assessment scores after WBA implementation.
- Abstract
- 10.1016/j.jvs.2023.03.044
- May 23, 2023
- Journal of Vascular Surgery
Perceptions of the Shared Learning Environment by Vascular and General Surgery Residents
- Research Article
- 10.1016/j.nepr.2025.104538
- Oct 1, 2025
- Nurse Education in Practice
Nursing students' readiness and anxiety regarding medical artificial intelligence: A mixed-methods study.
- Research Article
- 10.1186/s12909-023-04840-x
- Nov 6, 2023
- BMC Medical Education
Background: South Africa (SA) is on the brink of implementing workplace-based assessment (WBA) in all medical specialist training programmes in the country. Although competency-based medical education (CBME) has been in place for about two decades, WBA offers new and interesting challenges. The literature indicates that WBA has resource, regulatory, educational and social complexities; implementing WBA would therefore require a careful approach to this complex challenge. To date, insufficient exploration of WBA practices, experiences, perceptions and aspirations in healthcare has been undertaken in South Africa or Africa. The aim of this study was to identify factors that could impact WBA implementation from the perspectives of medical specialist educators. The outcomes reported are themes derived from reported potential barriers and enablers to WBA implementation in the SA context. Methods: This paper reports on the qualitative data generated from a mixed-methods study that employed a parallel convergent design, utilising a self-administered online questionnaire to collect data from participants. Data were analysed thematically and inductively. Results: The themes that emerged were structural readiness for WBA; staff capacity to implement WBA; quality assurance; and the social dynamics of WBA. Conclusions: Participants demonstrated impressive levels of insight into their respective working environments, producing an extensive list of barriers and enablers. Despite significant structural and social barriers, this cohort perceives the impending implementation of WBA to be a positive development in registrar training in South Africa. We make recommendations for future research and to the medical specialist educational leaders in SA.
- Supplementary Content
- 10.1002/aet2.10897
- Jul 30, 2023
- AEM Education and Training
Objectives: Residents in emergency medicine have reported dissatisfaction with feedback. One strategy to improve feedback is to enhance learners' feedback literacy, i.e., their capabilities as seekers, processors and users of performance information. To do this, however, the context in which feedback occurs needs to be understood. We investigated how residents typically engage with feedback in an emergency department, along with the potential opportunities to improve feedback engagement in this context. We used this information to develop a program to improve learners' feedback literacy in context and traced the reported translation to practice. Methods: We conducted a year-long design-based research study informed by agentic feedback principles. Over five cycles in 2019, we interviewed residents and iteratively developed a feedback literacy program. Sixty-six residents participated, and the data collected included qualitative evaluation surveys (n = 55), educator-written reflections (n = 5) and semistructured interviews with residents (n = 21). Qualitative data were analyzed using framework analysis. Results: When adopting an agentic stance, residents reported changes to the frequency and tenor of their feedback conversations, rendering the interactions more helpful. Despite reporting overall shifts in their conceptions of feedback, they needed to adjust their feedback engagement depending on changing contextual factors such as workload. These microsocial adjustments suggest that their feedback literacy develops through an interdependent process of individual intention for feedback engagement, informed by an agentic stance, and dynamic adjustment in response to the environment. Conclusions: Resident feedback literacy is profoundly contextualized, so developing feedback literacy in emergency contexts is more nuanced than previously reported. While feedback literacy can be supported through targeted education, our findings raise questions for understanding how emergency medicine environments afford and constrain learner feedback engagement. Our findings also challenge the extent to which this contextual feedback know-how can be "developed" purposefully outside of everyday work.
- Research Article
- 10.2196/44831
- May 11, 2023
- JMIR Formative Research
Background: Misleading health claims are widespread in the media, and making choices based on such claims can negatively affect health. Thus, developing effective learning resources to enable people to think critically about health claims is of great value. Serious games can become an effective learning resource in this respect, as they can affect motivation and learning. Objective: This study aims to document how user insights and input can inform the concept and development of a serious game application on critical thinking about health claims, in addition to gathering user experiences with the game application. Methods: This was a mixed-methods study in 4 successive phases, with both qualitative and quantitative data collected in the period from 2020 to 2022. Qualitative data on design and development were obtained from 4 unrecorded discussions, and qualitative evaluation data were obtained from 1 recorded focus group interview and 3 open-ended questions in the game application. The quantitative data originate from user statistics. The qualitative data were analyzed thematically, and user data were analyzed using nonparametric tests. Results: The first unrecorded discussion revealed that the students' (3 participants') assessment of whether a claim was reliable was limited to performing Google searches when faced with an ad for a health intervention. On the basis of the acquired knowledge of the target group, the game's prerequisites and the technical possibilities, a pilot of the game was created and reviewed question by question in 3 unrecorded discussions (6 participants). After adjustments, the game was advertised at Oslo Metropolitan University, and 193 students tested the game. A correlation (r=0.77; P<.001) was found between the number of replays and the total points achieved in the game. There was no demonstrable difference (P=.07) between the total scores of students from different faculties. Overall, 36.3% (70/193) of the students answered the evaluation questions in the game. They used words such as "fun" and "educational" about their experiences with the game, and words such as "motivating" and "engaging" about the learning experience. The design was described as "varied" and "user-friendly." Suggested improvements included adding references, more games and modules, more difficult questions, and an introductory text explaining the game. The results from the focus group interview (4 participants) corresponded to a large extent with the results of the open-ended questions in the game. Conclusions: We found that user insights and input can be successfully used in the concept and development of a serious game that aims to engage students to think critically about health claims. The mixed-methods evaluation revealed that the users experienced the game as educational and fun. Future research may focus on assessing the effect of the serious game on learning outcomes and health choices in randomized trials.
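The r = 0.77 association between replays and points described above is a correlation coefficient. As a minimal sketch, the code below computes a product-moment (Pearson) correlation from scratch on invented replay/score pairs; note this is an assumption for illustration only, since the study analysed its user data with nonparametric tests, so its reported coefficient may be a rank-based (Spearman) statistic.

```python
# Hedged sketch of a Pearson correlation between replay count and total score.
# The data below are invented; the study's actual user statistics are not public.
import math

def pearson_r(xs, ys):
    """Product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

replays = [1, 2, 2, 3, 4, 5, 6]          # hypothetical replay counts
scores  = [40, 55, 50, 70, 80, 85, 95]   # hypothetical total points
r = pearson_r(replays, scores)
```

A Spearman coefficient, if that is what the study used, would be obtained by applying the same formula to the ranks of each sequence rather than the raw values.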
- Research Article
- 10.1097/nnr.0b013e31827337b3
- Jan 1, 2013
- Nursing Research
Most heart failure patients have multiple comorbidities. This study aimed to test the moderating effect of comorbidity on the relationship between self-efficacy and self-care in adults with heart failure. A secondary analysis of four mixed-methods studies (n = 114) was conducted. Self-care and self-efficacy were measured using the Self-Care of Heart Failure Index. Comorbidity was measured with the Charlson Comorbidity Index. Parametric statistics were used to examine the relationships among self-efficacy, self-care, and the moderating influence of comorbidity. Qualitative data yielded themes about self-efficacy in self-care and explained the influence of comorbidity on self-care. Most participants (79%) reported two or more comorbidities. There was a significant relationship between self-care and the number of comorbidities (r = -.25, p = .03). There were significant differences in self-care by comorbidity level (self-care maintenance: F(1, 112) = 5.96, p = .019; self-care management: F(1, 72) = 4.66, p = .034). In a moderator analysis of the effect of comorbidity on self-efficacy and self-care, a significant effect was found only for self-care maintenance among those with moderate levels of comorbidity (b = .620, F-change(6, 48) = 5.61, p = .022). In the qualitative data, self-efficacy emerged as an important variable influencing self-care by shaping how individuals prioritized and integrated multiple and often competing self-care instructions. Comorbidity influences the relationship between self-efficacy and self-care maintenance, but only when levels of comorbidity are moderately high. Methods of improving self-efficacy may improve self-care in those with multiple comorbidities.