To the Editor. We read with great interest Dr. Sturpe's description of objective structured clinical examinations (OSCEs) at United States schools and colleges of pharmacy.1 Improving test reliability is central to the "objective" in OSCEs. Content specificity is a concern with OSCE assessments, and increasing the number of stations vastly improves test reliability.2 Dr. Sturpe's guidance on the number of OSCE stations required for suitable test reliability (a suggested 12-16 stations) is therefore instructive. Assessment drives learning, so the test reliability of assessments should be a key concern for pharmacy educators. While numerous versions of advanced pharmacy practice experience (APPE) evaluations are used at colleges and schools of pharmacy around the country, the test reliability of these evaluations should be an important consideration. If APPEs were conceived of as analogous to OSCE stations, then together, like an OSCE, they could speak to a common ability of learners, ie, the ability to practice pharmacy in a variety of environments. This ability continuum ranges from limited to expansive, and students may fall anywhere along it. Individual APPE rotation objectives must be linked to terminal school or college outcomes, and overall experience assessments mapped to these required objectives. Overall experience assessments should be standardized across preceptors and sites to ensure that students are assessed in a similar manner. As an example, if SOAP notes were used as part of an overall experience assessment, they should be assessed more than once within a single APPE and then repeated across multiple core APPEs (ie, 3 notes/APPE over 4 APPEs would provide 12 evaluations).
Additional "stations" also could be included to complement APPE assessments, similar to the variations that Hodge describes.3 Test reliability should be enhanced by additional rigorous assessments of similar APPE objectives, as long as all evaluations assess a similar ability in students. An advantage of including additional assessments is that they provide a reliable, standardized means of critical evaluation for all students. Examples of additional assessments include a final-year student presentation demonstrating evidence-based medicine skills,4 the National Association of Boards of Pharmacy's Pharmacy Curriculum Outcomes Assessment, or an individual college or school's outcome-based examination prior to APPEs. Undoubtedly, colleges and schools of pharmacy are investing significant resources in experiential programs and sites. How rigorous (ie, reliable) are their methods of evaluation? We put forward an alternative paradigm for thinking about APPE evaluations that draws on the strengths of an OSCE approach (ie, improved assessment reliability through greater station numbers). Additionally, some colleges and schools interested in performance-based assessment (such as with an OSCE) may be struggling to find resources to implement such an evaluation. Using experiential programs may foster the use of an OSCE approach to assessment.

Michael J. Peeters, PharmD, MEd,a Craig D. Cox, PharmDb
aUniversity of Toledo College of Pharmacy
bTexas Tech University Health Sciences Center School of Pharmacy