Abstract

Performance examinations (computer and human simulations) were developed by medical educators to assess complex aspects of clinical skill. In 1995, all fourth-year medical students at the UCLA School of Medicine and the Drew University of Medicine and Science were required to take the National Board of Medical Examiners' computer-based examination (CBX) and a standardized patient (SP) examination. Students then rated the validity of different clinical evaluation methods (CBX, SP exam, attending evaluation, resident evaluation, written shelf exams with multiple-choice questions (MCQ), and oral examinations) along four parameters: 1) knowledge of medicine, 2) clinical decision-making skills, 3) overall ability to function as a doctor, and 4) selecting a potential caregiver for a family member. Results indicated that 1) for knowledge of medicine, the written exams were ranked best; 2) for clinical decision making, the computer exam was ranked highest; 3) for overall functioning as a doctor, the attending and resident evaluation process was ranked highest; and 4) the attending evaluation was rated most accurate for judging a doctor as a potential caregiver for a family member. These findings indicate that students' perceptions of the examinations matched the developers' intent, lending support to the need for a multipronged assessment approach.
