Abstract

This study describes a simple set of statistical parameters for assessing the reliability and validity of oral examinations (OE). Traditional feedback to examiners tends to categorize them as 'hawks' or 'doves' on the basis of whether their personal mean mark is above or below the group mean. Our study shows that the mean OE mark on its own is not a good measure of examiner performance. We suggest that inter-rater reliability between examiner pairs is a more satisfactory indicator of reliability and face validity. The correlation between the OE marks given by an examiner and the student's subtotal from written parts of the exam (SUBTOT) is suggested as a useful indicator of OE validity. These measures, as applied to our own student exam results, suggest that our OE examiners are performing at an acceptable standard of agreement (Cohen's Kappa for pass/fail 0.74, p < 0.0001), and support the use of the OE as a method of student assessment.
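The two statistics named above can be computed with standard tools. Below is a minimal sketch, not the authors' actual analysis, showing how a Cohen's kappa for paired examiners' pass/fail decisions and a correlation between one examiner's OE marks and the written-exam subtotal (SUBTOT) might be calculated in Python; all data values are invented for illustration only.

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical pass/fail decisions (1 = pass, 0 = fail) from a pair of examiners
# who assessed the same students; these are NOT the study's data.
examiner_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

# Cohen's kappa: chance-corrected agreement between the examiner pair
kappa = cohen_kappa_score(examiner_a, examiner_b)
print(f"Cohen's kappa (pass/fail agreement): {kappa:.2f}")

# Hypothetical numeric OE marks from one examiner and the corresponding
# written-exam subtotals (SUBTOT); Pearson's r indicates how closely the
# oral marks track written performance.
oe_marks = [62, 70, 48, 75, 55, 68, 72, 45, 66, 74]
subtot   = [58, 72, 50, 78, 60, 65, 70, 47, 63, 71]
r, p_value = pearsonr(oe_marks, subtot)
print(f"OE vs SUBTOT correlation: r = {r:.2f}, p = {p_value:.3f}")
```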
