The use of multiple-choice questions (MCQs) in law schools has not gained widespread acceptance, unlike in medical schools, where they are used globally. Law schools traditionally assess students with essay-type, problem-solving questions. The efficacy of this form of assessment is increasingly under attack owing to growing student numbers and advances in technology that enable students to generate answers. Bloom's taxonomy provides a hierarchy of learning processes: 'remember', 'understand', 'apply', 'analyze', 'evaluate', and 'create'. Studies on whether MCQs can test the higher-order learning processes required in law courses have been inconclusive.

We conducted a retrospective study to investigate whether MCQs are an efficient and effective way of assessing law courses, motivated by the desire to find an alternative mode of assessment that overcomes the threats facing the essay type. Results from selected law courses were analyzed, comparing students' performance on MCQ tests, oral tests, and final examinations. The MCQs were mapped onto a two-dimensional Bloom's table to establish the extent to which they tested higher-order learning processes, and MCQ scores were compared with those of the oral tests and the final essay-type examinations using correlational analysis and one-way analysis of variance (ANOVA). The courses were Constitutional Law (n = 22, MCQs = 75), Intellectual Property Rights (n = 113, MCQs = 100), Broadcast & TV Law (n = 11, MCQs = 76), Administrative Law (n = 65, MCQs = 91), and Cyberspace Law (n = 28, MCQs = 101).

In general, students performed best on the MCQ tests compared with the orals and final examinations. A two-tailed correlation analysis across all five courses showed a strong but statistically non-significant correlation between MCQ and oral scores (r = 0.699, p = 0.189) and a weak correlation between MCQ and final-examination scores (r = 0.196, p = 0.752). In Constitutional Law, 68% of the 75 MCQs tested higher-order processes; mean scores for the MCQ test (M = 39.45, SD = 4.83), orals (M = 7.64, SD = 18.72), and final examination (M = 28.50, SD = 8.88) showed the best performance on the MCQs, and ANOVA revealed a highly significant difference (F(2, 63) = 38.11, p < 0.0001). In Intellectual Property Rights, 54 of the 100 MCQs tested higher-order learning processes; mean scores for the MCQ test (M = 39.78, SD = 5.22), orals (M = 18.88, SD = 14.66), and final examination (M = 29.04, SD = 7.71) again favored the MCQs, and the ANOVA result was highly significant (F(2, 342) = 125.565, p < 0.0001). In Broadcast & TV Law, 44.7% of the 76 MCQs tested higher-order learning processes; mean scores for the MCQ test (M = 43.27, SD = 6.89), orals (M = 31.09, SD = 12.53), and final examination (M = 32.09, SD = 6.69) differed significantly (F(2, 30) = 6.057, p < 0.01). In Administrative Law, 74% of the 91 MCQs tested higher-order processes; mean scores were MCQ (M = 33.05, SD = 4.76), orals (M = 22.89, SD = 16.08), and final examination (M = 32.58, SD = 5.58), with a highly significant ANOVA result (F(2, 191) = 20.388, p < 0.0001). In Cyberspace Law, 38.6% of the 101 MCQs tested higher-order learning processes; mean scores were MCQ (M = 43.50, SD = 4.51), orals (M = 36.43, SD = 2.27), and final examination (M = 35.04, SD = 7.71), again with a highly significant difference (F(2, 81) = 20.375, p < 0.0001). We conclude that MCQs are efficacious and efficient in testing higher-order learning processes.
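To make the quantitative steps above concrete, the following is a minimal sketch in Python (NumPy/SciPy) of the two analyses described: tallying items on Bloom's levels and comparing scores with a Pearson correlation and a one-way ANOVA. All data arrays and the choice of which Bloom's levels count as "higher order" are illustrative assumptions, not the study's materials; the printed values will not reproduce the reported statistics.

```python
# Illustrative sketch of the two quantitative steps in the abstract.
# All data below are hypothetical placeholders, NOT the study's data.
import numpy as np
from scipy.stats import pearsonr, f_oneway

# --- Step 1: share of MCQs testing higher-order processes --------------
# Assumption: 'apply' and above count as higher order on Bloom's table.
higher_order = {"apply", "analyze", "evaluate", "create"}
item_levels = ["remember", "apply", "analyze", "understand",
               "evaluate", "apply", "create", "remember"]  # hypothetical
share = sum(lvl in higher_order for lvl in item_levels) / len(item_levels)
print(f"Higher-order items: {share:.1%}")

# --- Step 2: correlation across courses, ANOVA within a course ---------
# Hypothetical per-course mean scores on the MCQ test and the orals.
mcq_course_means = np.array([39.5, 39.8, 43.3, 33.1, 43.5])
oral_course_means = np.array([7.6, 18.9, 31.1, 22.9, 36.4])
r, p_r = pearsonr(mcq_course_means, oral_course_means)
print(f"MCQ vs. orals across courses: r = {r:.3f}, p = {p_r:.3f}")

# Hypothetical per-student scores within one course (n = 22 per test,
# so the between/within degrees of freedom are 2 and 3*22 - 3 = 63).
rng = np.random.default_rng(0)
mcq = rng.normal(39.5, 4.8, size=22)
orals = rng.normal(7.6, 18.7, size=22)
final = rng.normal(28.5, 8.9, size=22)
F, p_f = f_oneway(mcq, orals, final)
print(f"One-way ANOVA: F(2, {3 * 22 - 3}) = {F:.2f}, p = {p_f:.4g}")
```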
MCQs can therefore be used to assess learning in law courses, and we recommend that law schools adopt them for this purpose.