Abstract

Purpose: The purposes of this study were to develop and assess a rating form for the selection of surgical residents, determine the criteria most important in selection, determine the reliability of the assessment form and process both within and across sites, and document differences in the procedure and structure of resident selection processes across Canada.

Methods: Twelve of the 13 English-speaking orthopedic surgery training programs in Canada participated during the 1999 selection year. The critical incident technique was used to determine the criteria most important in selection. From these criteria, a 10-item rating form was developed, with each item rated on a 5-point scale. Sixty-six candidates were invited for interviews across the country. Each interviewer completed one assessment form per candidate and independently ranked all candidates at the conclusion of all interviews. Consensus final rank orders were then created for each residency program, and pairwise program-by-program correlations were computed across all programs for each assessment parameter.

Results: The internal consistency of the assessment form ratings for each interviewer was moderately high (mean Cronbach’s alpha = 0.71). Correlating each item with the final rank order for each program revealed that work ethic, interpersonal qualities, orthopedic experience, and enthusiasm correlated most highly with final candidate rank orders (r = 0.50, 0.48, 0.48, and 0.45, respectively). The interrater reliability (within panels) and interpanel reliability (within programs) for the rank orders were 0.67 and 0.63, respectively. Applying the Spearman-Brown prophecy formula showed that two panels with two interviewers on each panel are required to obtain a stable measure of a given candidate (reliability of 0.80). The average pairwise program-by-program correlation for the final candidate rank orders was low (0.14).
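The Spearman-Brown projection mentioned in the results can be sketched as follows. This is the classic single-facet form of the formula; the study pooled panels and interviewers, so treating the reported 0.63 interpanel reliability as the single-observation value and the number of panels as the lengthening factor is an illustrative assumption, not the authors' exact calculation:

```python
def spearman_brown(r_single: float, n: int) -> float:
    """Projected reliability of a measurement lengthened n-fold,
    given the reliability r_single of a single observation."""
    return n * r_single / (1 + (n - 1) * r_single)

def observations_needed(r_single: float, target: float = 0.80) -> int:
    """Smallest number of parallel observations whose average
    reaches the target reliability."""
    n = 1
    while spearman_brown(r_single, n) < target:
        n += 1
    return n

# Using the abstract's interpanel reliability of 0.63:
print(round(spearman_brown(0.63, 2), 3))  # reliability of averaging two panels
print(observations_needed(0.63))          # panels needed to reach 0.80
```

Note that the projection grows with diminishing returns: each added panel or interviewer raises reliability less than the previous one, which is why a small fixed number of observations suffices.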
Conclusions: A method was introduced to develop a standard, reliable candidate assessment form and to evaluate residency selection procedures. The assessment form ratings were internally consistent for individual interviewers. Candidate assessments within programs (both between interviewers and between panels) were moderately reliable, suggesting agreement within programs regarding the relative quality of candidates, but there was very little agreement across programs.
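The internal-consistency figure quoted in the results (mean Cronbach’s alpha = 0.71) is straightforward to compute from a matrix of raw ratings. The sketch below uses small synthetic data, not the study's ratings, purely to illustrate the calculation:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a candidates-by-items rating matrix.

    ratings: list of rows, one per candidate; each row holds the scores
    on the form's items (the study's form had 10 items on a 5-point scale).
    """
    k = len(ratings[0])            # number of items
    def var(xs):                   # population variance; the denominator
        m = sum(xs) / len(xs)      # choice cancels in the final ratio
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in ratings]) for i in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-candidate, 3-item example:
example = [[5, 4, 5], [3, 3, 4], [4, 4, 4], [2, 3, 2]]
print(round(cronbach_alpha(example), 2))
```

Alpha rises when items vary together across candidates (candidates rated high on one item tend to be rated high on the others), which is the sense in which the form's ratings were "consistent within interviewers."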
