Abstract

Studies of the evaluation of medical students' clinical performance frequently do not differentiate between ratings by house officer and attending staff evaluators. This practice is not appropriate, since research has shown that house officers rate medical students' clinical performance higher, and show higher interrater agreement, than attending staff do. This investigation examines one aspect of the validity of medical students' clinical performance ratings and demonstrates that house officer ratings of student knowledge correlate more strongly with student cognitive ability scores than attending staff evaluations do.
