Abstract
Clinical reasoning during undergraduate medical education can be difficult to assess. Undergraduate medical students at a large academic trauma center are required to manage a series of online virtual trauma patients as a mandatory exercise during their surgical rotation. The purpose of this study was to determine whether components of the students' virtual patient (VP) management could be used to measure changes in their clinical reasoning over the course of the clerkship year. To accomplish this, we examined whether scoring rubrics could replace the traditional subjective assessment with a more objective evaluation. Two groups of students were chosen, one at the beginning of clerkship (Juniors) and one at the end of clerkship (Seniors). Each group was given the same VP case, a clinical scenario based on the Advanced Trauma Life Support (ATLS) Primary Trauma Survey, which had to be completed during their trauma rotation. The learner was required to make several key patient-management choices based on their clinical reasoning, which took them along different routes through the case. At the end of the case, they had to create a summary report akin to a clinical sign-off. These summaries were graded independently by two domain "Experts" using a traditional subjective surgical approach to assessment and by two "Non-Experts" using two internally validated scoring rubrics. One rubric assessed procedural or domain knowledge (Procedural rubric), while the other highlighted semantic qualifiers (Semantic rubric). Each rubric was designed to reflect established components of clinical reasoning. Student's t-tests were used to compare the rubric scores of the two groups, and Cohen's d was used to determine effect size. Kendall's τ was used to compare the two groups on the basis of the "Experts'" subjective assessment. Inter-rater reliability (IRR) was determined using Cronbach's alpha. As assessed by the "Non-Experts" using the rubrics, the Seniors did better than the Juniors on "procedural" issues but not on "semantic" issues. The average Procedural rubric score was 59% ± 13% for the Senior group and 51% ± 12% for the Junior group (t(80) = 2.715; p = 0.008; Cohen's d = 1.53). The average Semantic rubric score was 31% ± 15% for the Senior group and 28% ± 14% for the Junior group (t(80) = 1.010; p = 0.316, ns). There was no statistically significant difference between the marks given to the Senior and Junior groups by the "Experts" (Kendall's τ = 0.182, p = 0.07). The IRR between the "Non-Experts" using the rubrics was higher than the IRR of the "Experts" using the traditional surgical approach to assessment: Cronbach's alpha for the Procedural and Semantic rubrics was 0.94 and 0.97, respectively, indicating very high IRR. The correlation between the Procedural rubric scores and the "Experts'" assessment was approximately r = 0.78, and that between the Semantic rubric scores and the "Experts'" assessment was roughly r = 0.66, indicating high concurrent validity for the Procedural rubric and moderately high concurrent validity for the Semantic rubric. Clinical reasoning, as measured by some of its "procedural" features, improves over the course of the clerkship year. Rubrics can be created to objectively assess the summary statement of an online interactive trauma VP for "procedural" issues but not for "semantic" issues. Using IRR as the measure, the quality of assessment is improved by using the rubrics.
The "Procedural" rubric appears to measure changes in clinical reasoning over the course of 3rd-year undergraduate clinical studies.