Abstract

The aim of this study was to assess the reliability and validity of tests assessing medical students' ability to read medical journal articles critically.

All third-year students at our medical school (n=125) took three tests between December 2004 and June 2005 as part of a program to teach critical appraisal of the medical literature. Each test required students to write a 250-word structured abstract and to answer a set of 6 to 8 appraisal questions. Interrater reliability was assessed for the first test with the intraclass correlation coefficient (ICC). Validity was assessed (1) by calculating the correlation coefficient between the mean scores on the three critical appraisal tests and (2) by calculating the correlation coefficient between these tests and eight other tests in other subjects that the students took over the same period.

Interrater reliability was satisfactory for the overall score (ICC=0.72) and for the score on the appraisal questions alone (ICC=0.73). It was only moderate, however, for the structured abstract (ICC=0.53) and varied markedly from question to question (ICC range: 0.29-0.86). Intertest correlations were all statistically significant (p<0.001), and the mean score on the critical reading tests was significantly correlated with the mean score on the tests in the other subjects taken that year (r=0.60; p<0.001).

The validity and reliability of the tests of critical reading of the medical literature are satisfactory but can be improved by developing more explicit scoring templates.
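The two statistics reported above (the intraclass correlation coefficient for interrater reliability and the Pearson correlation for validity) can be computed on any comparable data set. The sketch below is not the authors' code; it uses hypothetical rater and test scores and assumes the third-party pingouin and scipy packages are installed.

```python
# Minimal sketch (hypothetical data, not the study's actual scores) showing how
# an intraclass correlation coefficient and a Pearson correlation could be computed.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Long-format data: each student's test scored independently by two raters.
scores = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["A", "B"] * 5,
    "score":   [14, 15, 10, 12, 18, 17, 9, 11, 13, 13],
})

# Interrater reliability: ICC across raters scoring the same students.
icc = pg.intraclass_corr(data=scores, targets="student",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Validity: Pearson correlation between mean critical-appraisal scores and
# mean scores on tests in other subjects (hypothetical values).
critical_reading = [12.5, 11.0, 17.5, 10.0, 13.0]
other_subjects   = [13.0, 12.0, 16.0, 11.5, 12.5]
r, p = pearsonr(critical_reading, other_subjects)
print(f"r = {r:.2f}, p = {p:.3f}")
```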
