Abstract

Background
Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to strengthen that validity evidence by evaluating the internal validity and reliability of competency constructs from supervisors’ end-of-term summative assessments of prevocational medical trainees.

Methods
The study populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥ 2 end-of-term summative assessments (n = 349), all from a single institution. Confirmatory factor analysis (CFA) was used to evaluate the internal construct validity of the assessment. The hypothesised competency construct model, identified by exploratory factor analysis, had a theoretical basis in the workplace-psychology literature. It was compared with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model was identified using model fit and measurement invariance analyses. Construct homogeneity was assessed with Cronbach’s α. Reliability measures were the variance components of individual competency items and of the identified competency constructs, and the number of assessments needed to achieve adequate reliability (R > 0.80).

Results
The hypothesised competency constructs of “general professional job performance”, “clinical skills” and “professional abilities” provided a good fit to the data, and a better fit than all alternative models (χ²/df = 2.8; RMSEA = 0.073, CI 0.057–0.088; CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; BIC = 4018). The optimal model showed adequate measurement invariance, with nested analysis of important population subgroups supporting the presence of full metric invariance. Reliability estimates for the competency construct “general professional job performance” indicated a resource-efficient and reliable assessment for such a construct (6 assessments for R > 0.80), and item homogeneity was good (Cronbach’s α = 0.899). The other competency constructs were resource intensive, requiring ≥ 11 assessments for a reliable assessment score.

Conclusion
The internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the competency constructs actually used by assessors are adequately identified. Validating the interpretation and use of supervisors’ assessments in local training schemes is feasible using standard methods for gathering validity evidence.
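
To make the CFA step concrete, the sketch below fits a hypothesised three-construct model and a competing one-factor model, then prints the same family of fit indices reported above (χ², CFI, TLI, RMSEA, AIC, BIC). It is a minimal illustration, not the study’s analysis: the item names, the factor-to-item mapping, and the input file are hypothetical placeholders, and the third-party semopy package is assumed as one Python option for CFA.

    # Minimal CFA sketch (assumes: pip install semopy pandas).
    # Item names (item1..item9) and the factor-to-item mapping are
    # hypothetical placeholders, not the study's actual items.
    import pandas as pd
    import semopy

    # One row per completed supervisor assessment, one column per item.
    ratings = pd.read_csv("supervisor_ratings.csv")  # hypothetical file

    # Hypothesised three-construct model (lavaan-style syntax).
    three_factor = """
    job_performance        =~ item1 + item2 + item3 + item4
    clinical_skills        =~ item5 + item6 + item7
    professional_abilities =~ item8 + item9
    """

    # Competing single-construct model for comparison.
    one_factor = "competence =~ " + " + ".join(f"item{i}" for i in range(1, 10))

    for name, desc in [("three-factor", three_factor), ("one-factor", one_factor)]:
        model = semopy.Model(desc)
        model.fit(ratings)
        stats = semopy.calc_stats(model)  # one-row table of fit indices
        print(name)
        print(stats[["chi2", "DoF", "CFI", "TLI", "RMSEA", "AIC", "BIC"]])

A lower AIC/BIC and higher CFI/TLI for the three-factor model, relative to the competing models, is the kind of evidence that would support the construct structure reported above.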
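
The reliability figures can be sketched the same way. Cronbach’s α measures item homogeneity within a construct, and the number of assessments needed to reach a target composite reliability can be projected from single-assessment reliability with the Spearman–Brown formula, R_n = n·r / (1 + (n − 1)·r), one standard way of carrying out the variance-components projection described above. The single-assessment reliability of 0.40 in the example is illustrative only, not the study’s estimate.

    # Reliability sketch: Cronbach's alpha and a Spearman-Brown
    # projection. All numbers here are illustrative, not study data.
    import math
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = assessments, columns = items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def assessments_needed(single_r: float, target: float = 0.80) -> int:
        """Smallest n whose mean-score reliability reaches the target:
        R_n = n*r / (1 + (n - 1)*r)  =>  n >= target*(1-r) / (r*(1-target))."""
        return math.ceil(target * (1 - single_r) / (single_r * (1 - target)))

    # A single-assessment reliability of 0.40 would need
    # 0.8 * 0.6 / (0.4 * 0.2) = 6 assessments to reach R = 0.80.
    print(assessments_needed(0.40))  # -> 6

By the same formula, a single-assessment reliability below about 0.27 pushes the requirement to ≥ 11 assessments, the pattern reported above for the other constructs.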

Highlights

  • Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use

  • The validation of workplace-based assessments (WBAs) remains an area of ongoing improvement, as identified by Kogan and colleagues: “many tools are available for the direct observation of clinical skills, validity evidence and description of educational outcomes are scarce” [2]

  • An argument-based approach to validation followed by evaluation, an approach long championed by Michael Kane [7,8,9], provides a framework for the evaluation of claims of competency based on assessment scores obtained from many different forms of assessment [10]



Introduction

Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. An argument-based approach to validation followed by evaluation, long championed by Michael Kane [7,8,9], provides a framework for evaluating claims of competency based on assessment scores obtained from many different forms of assessment [10]. Within this framework, the educator states explicitly and in detail the proposed interpretation and use of the assessment scores, and these proposals are then evaluated for plausibility [10]. If claims of interpretation and use from an assessment cannot be validated, “they count against the test developer or user” [11]. This theoretical framework is potentially useful for evaluating new and established methods of assessing postgraduate medical trainees. It should be noted that this approach is one of a number of validity theory proposals that continue to evolve [12,13,14,15].

