Abstract

Journal of Urology, Volume 207, Issue Supplement 5, May 2022, Page e513
PD30-08 UROLOGIC TRAINEE SURGICAL ASSESSMENT TOOLS: SEEKING STANDARDIZATION
Lauren Conroy, Kyle Blum, Hannah Slovacek, Phillip Mann, and Steven Canfield
https://doi.org/10.1097/JU.0000000000002578.08

INTRODUCTION AND OBJECTIVE: Assessing a urology trainee's performance is critical to evaluating longitudinal progress toward surgical autonomy. Presently, there is wide variability in the surgical assessment tools used by training programs. We aim to critically analyze the tools available in urologic surgery and assess their validity, reliability, and feasibility, in an effort to identify features that may lead to a more standardized assessment pathway.

METHODS: The primary literature was reviewed to identify surgical assessment tools published within the past 20 years. Assessments specific to urologic training were included for final review. Each tool was assessed on its ability to identify performance differences between participants with varying experience (construct validity), its ability to measure the behavior it was intended to measure (content validity), whether there was agreement among raters' scores (interrater reliability), whether individual raters' scores correlated with overall scores (internal consistency), and whether it had been externally validated.

RESULTS: Thirty surgical assessment tools were identified, of which 15 were specific to urology (Table 1). Of these, six (40.0%) assessed surgical simulations (e.g., robotic trainer) and nine (60.0%) were designed to provide real-time feedback in the operating room. Twelve (80.0%) had some form of validity: eight (53.3%) had construct validity, nine (60.0%) had content validity, and five (33.3%) had both. Six (40.0%) of the tools were significantly reliable, with all six demonstrating at least moderate interrater reliability for every component and one showing at least acceptable internal consistency. No tool was externally validated.

CONCLUSIONS: There is high variability among the available urologic trainee assessment tools. While 15 tools were identified, each had varying degrees of internal validity, and none had been externally validated. This lack of external validity creates the possibility of large inter-assessment variability between tools and invites evaluation discordance between training programs. There is an unmet need for a standardized surgical assessment tool, incorporated into AUA training programs, that is internally consistent, reliable, and externally valid.

Source of Funding: None

© 2022 by American Urological Association Education and Research, Inc.
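For readers unfamiliar with the two reliability criteria screened in the Methods, the sketch below shows how they are commonly quantified: Cohen's kappa for interrater reliability (chance-corrected agreement between two raters) and Cronbach's alpha for internal consistency (correlation of item scores with the overall score). This is an illustrative example only; the rating data are invented and do not come from the reviewed tools or the abstract.

```python
# Minimal sketch of the two reliability statistics named in the Methods.
# All rating data below are hypothetical, for demonstration only.

from statistics import pvariance


def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (interrater reliability)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)


def cronbachs_alpha(item_scores):
    """Internal consistency; item_scores is one list of scores per item,
    each covering the same trainees in the same order."""
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))


# Hypothetical 5-point global ratings of eight trainees by two raters:
a = [3, 4, 2, 5, 3, 4, 2, 5]
b = [3, 4, 3, 5, 3, 4, 2, 4]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints "kappa = 0.67"

# Hypothetical per-item scores (3 checklist items x the same 8 trainees):
items = [
    [3, 4, 2, 5, 3, 4, 2, 5],
    [3, 5, 2, 4, 3, 4, 3, 5],
    [2, 4, 3, 5, 3, 5, 2, 4],
]
print(f"alpha = {cronbachs_alpha(items):.2f}")  # prints "alpha = 0.90"
```

Conventional rules of thumb treat kappa of 0.41-0.60 as moderate agreement and alpha of at least 0.70 as acceptable consistency, which are the thresholds the abstract's "at least moderate" and "at least acceptable" phrasing refers to.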
