Abstract

The objective structured clinical examination (OSCE) was introduced in 1975 as a standardized tool for objectively assessing clinical competencies, including history-taking, physical examination, communication skills, and data interpretation. It consists of a circuit of stations connected in series, with each station devoted to the assessment of a particular competency using pre-determined guidelines or checklists. OSCE has been used as a tool for both formative and summative evaluation of undergraduate and postgraduate medical students across the globe. Its use for formative assessment has great potential, as learners can gain insight into the elements that make up clinical competence as well as feedback on their personal strengths and weaknesses. However, the success of OSCE depends on the adequacy of resources, including the number of stations, the construction of stations, the method of scoring (checklists and/or global rating scales), the number of students assessed, and sufficient time and money. Lately, OSCE has drawn criticism for shortcomings in validity, feasibility, practicality, and objectivity. There is evidence that many OSCEs may be too short to achieve reliable results, and there are currently no clear-cut standards for passing an OSCE. It is also perceived that OSCEs test students' knowledge and skills in a compartmentalized fashion rather than considering the patient as a whole. This article focuses on the issues of validity, objectivity, reliability, and standard setting in OSCE. At present, Indian experience with OSCE is limited, and there is a need to sensitise Indian faculty and students to it. A cautious approach is desirable before OSCE is adopted as a supplementary tool alongside other methods of assessment for summative examinations in Indian settings.
