Abstract

The Council of Emergency Medicine Residency Directors (CORD) introduced the standardized letter of recommendation (SLOR) in 1997, and it has become a critical tool for assessing candidates for emergency medicine (EM) training. The SLOR itself has not been evaluated since the initial studies associated with its introduction. This study characterizes current SLOR use to determine whether it serves its intended purpose of being standardized, concise, and discriminating.

This retrospective, multi-institutional study evaluated letters of recommendation (LORs) from U.S. allopathic applicants to three EM training programs during the 2011-2012 Electronic Residency Application Service (ERAS) application cycle. Distributions of responses to each question on the SLOR were calculated, and the free-text responses were analyzed. Two pilots, each performed on five applicants, assisted in developing a strategy for limiting interrater variability. Each of the three geographically diverse programs provided a complete list of U.S. allopathic applicants to its program. After randomization, each program received a program-specific list of coded applicants randomly selected for data collection. The number of applicants was chosen to reach a goal of approximately 200 SLORs per site (n = 602).

This sample comprised 278 of 1,498 applicants (18.6%) from U.S. allopathic schools, for whom a total of 1,037 LORs were written; 724 (69.8%) were written by emergency physicians. SLORs represented 57.9% (602/1,037) of all LORs, regardless of author type, and 83.1% (602/724) of letters written by emergency physicians. Three hundred ninety-two of 602 SLORs (65.1%) had a single author. On the "global assessment" question, students were scored in the top 10% in 234 of 583 applications (40.1%; the question was not answered in some), and 485 of 583 applicants (83.2%) were ranked above the level of their peers. Similarly, >95% of all applicants were ranked in the top third compared to their peers on all but one item under "qualifications for emergency medicine." In 405 of 602 SLORs (67.2%), one or more questions were left unanswered, while 76 SLORs (12.6%) were "customized" (changed from the standard template). Finally, in 291 of 599 SLORs (48.6%), the word count exceeded the recommended maximum of 200 words.

Grade inflation is marked throughout the SLOR, limiting its ability to be discriminating. Furthermore, template customization and skipped questions work against the intention to standardize the SLOR. Finally, it is not uncommon for comments to exceed the length recommended by the guidelines. As an assessment tool, the SLOR could be more discerning, concise, and standardized to serve its intended purpose.
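
The sketch below is a minimal, hypothetical illustration of the sampling and tabulation approach described in the methods (random selection of coded applicants per program, then per-question response distributions that exclude unanswered items). The record structure, field names, and per-site target are assumptions for demonstration only and do not represent the authors' actual data or analysis code.

```python
import random
from collections import Counter

def sample_applicants(applicant_ids, n_target, seed=0):
    """Randomly select a coded subset of applicants for one program (hypothetical)."""
    rng = random.Random(seed)
    return rng.sample(list(applicant_ids), min(n_target, len(applicant_ids)))

def response_distribution(slors, question):
    """Tabulate counts and percentages for one SLOR question, skipping unanswered items."""
    answered = [s[question] for s in slors if s.get(question) is not None]
    counts = Counter(answered)
    total = len(answered)
    return {response: (n, 100.0 * n / total) for response, n in counts.items()}

# Made-up example: two SLORs answer "global assessment"; one leaves it blank,
# so the blank record is excluded from the denominator.
slors = [
    {"global_assessment": "top 10%"},
    {"global_assessment": "top third"},
    {"global_assessment": None},
]
print(response_distribution(slors, "global_assessment"))
# {'top 10%': (1, 50.0), 'top third': (1, 50.0)}
```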

