Abstract

Curricular reforms have driven the adoption of active learning methods such as case-based learning (CBL) to stimulate critical thinking, problem solving, and collaborative learning in preclinical medical and healthcare learners. Despite this transition, most preclinical programs rely on recognition-recall multiple choice questions for formative and summative assessment, and limited tools exist for assessing preclinical case-based learning. The purpose of this abstract is to introduce a newly developed rubric for assessing CBL in preclinical learners and to share preliminary outcomes from its implementation in pharmacology- and pathology-focused cases.

The Endocrine-Reproductive module (E-R-M) at UCF COM previously used a 6-criteria clinical reasoning rubric (R1), validated for residents, to determine whether second-year medical students (M2) "mastered" these competencies: medical knowledge, clinical reasoning, practice-based improvement, communication, and management. Unfortunately, formative and summative use of R1 to assess CBL did not promote improvement beyond "passing" as the E-R-M progressed, because the rubric design was insufficient to assess basic pathology and pharmacology knowledge and application within the CBL and lacked the scaffolded feedback novices need to improve CBL problem solving and critical thinking skills. An improved rubric (R2) was developed from the R1 criteria using a 10-point scale per criterion, with increments designed to rate performance from the early preclinical level (1) to the practicing clinical level (10); criteria such as "management" were divided into 10 levels ranging from basic physiologic/pharmacologic principles to complex evidence-based management.

R2 was pilot tested on de-identified team CBL reports previously submitted for the 2017 case 3 ("C3", a diabetes case rich in pharmacology and pathology) to confirm that it measured M2-level content in CBL sessions. R2 was then used to assess 2018 E-R-M CBL reports and provide formative feedback to 20 M2 teams. Formative feedback using R2 on earlier 2018 cases was followed by progressive improvement in criterion performance, from a mean of 3.9 on case 1 to a mean of 7.0 by 2018 E-R-M C3. Comparing R2 scores for 2017 C3 (the same diabetes case, for which R1 had been used for 2017 formative feedback) with 2018 C3 (for which R2 had been used for prior formative feedback), the mean R2 criterion performance was significantly higher for 2018 C3 (mean 2018 C3 = 7 vs. mean 2017 C3 = 4, out of a maximum of 10 on the R2 rubric criteria; t-test, p < 0.01).

These pilot data suggest that R2 provides formative assessment and helpful feedback for M2s to improve CBL pharmacology and pathology application, critical thinking, and communication skills. While further study is needed to confirm the reliability of R2 for summative CBL scoring, preliminary evaluation supports that R2 may be an excellent formative tool to help preclinical learners master basic CBL competencies related to pharmacology, pathology, or other disciplines. Further, this rubric may benefit healthcare educational programs transitioning to preclinical competency-based curricula, given that most competency rubrics in current use are not designed for preclinical assessment.

Support or Funding Information

Educational outcomes research; not funded.

This abstract is from the Experimental Biology 2019 Meeting. There is no full text article associated with this abstract published in The FASEB Journal.

