Abstract

Students must be familiar with clinical skills before starting clinical practice to ensure patient safety and enable efficient learning. However, performance is mainly tested in the third or fourth year of medical school, and studies using the validity framework have not been reported in Korea. We analyzed the validity of a performance test conducted among second-year students, classifying the evidence into content, response process, internal structure, relationships with other variables, and consequences according to Messick's framework. The analysis showed that content validity was established by developing cases according to a pre-determined blueprint. The quality of the response process was controlled by training and calibrating raters. Regarding internal structure, (1) reliability estimated by generalizability theory was acceptable (coefficients of 0.724 and 0.786 for day 1 and day 2, respectively), and (2) related domains were appropriately correlated, whereas the clinical performance examination (CPX) and the objective structured clinical examination (OSCE) were more weakly related. OSCE/CPX scores were correlated with other variables, especially grade point average and oral structured exam scores. The consequences of this assessment were (1) prompting students to learn clinical skills and to study on their own, while causing excessive stress in students who lacked motivation; (2) reminding educators of the need to apply practical teaching methods and to give feedback on test results; and (3) providing an opportunity for faculty to consider developing support programs. It is necessary to develop the blueprint more precisely according to students' level and to verify the validity of the response process using statistical methods.
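
For readers unfamiliar with generalizability theory, the coefficients cited above take the standard form below; this is a minimal sketch assuming a fully crossed person × station × rater design, since the abstract does not specify the measurement design used:

E\rho^2 = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_\delta^2},
\qquad
\sigma_\delta^2 = \frac{\sigma_{ps}^2}{n_s} + \frac{\sigma_{pr}^2}{n_r} + \frac{\sigma_{psr,e}^2}{n_s n_r}

Here \sigma_p^2 is the universe-score (person) variance, \sigma_{ps}^2 and \sigma_{pr}^2 are the person-by-station and person-by-rater interaction variances, \sigma_{psr,e}^2 is the residual, and n_s and n_r are the numbers of stations and raters. Larger error-variance components relative to person variance lower the coefficient, which is why rater training and case blueprinting bear directly on the reported values.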
