Comparison of Certainty-Based Marking (CBM) and Number Right Scoring (NRS) in Multiple-Choice Question (MCQ) Assessments: A Prospective Cohort Study of Second-Year Medical Students

Abstract

Background: Multiple-choice questions (MCQs) with number right scoring (NRS) are currently the preferred assessment method in medical education, which requires program administrators to ensure that the resulting scores are fair enough to support sound decisions and policy. Adding certainty-based marking (CBM) to MCQ exams has been suggested as a way to discriminate better between students and thus increase the validity of assessments. This study integrates CBM into MCQ assessments using a relatively new scoring matrix and compares the resulting exam scores and pass/fail rates with those obtained under NRS.
Methods: Second-year medical students sat ten different CBM-MCQ exams. Each exam was scored twice: first conventionally with NRS, and second with CBM (based on correctness and a self-reported certainty scale), yielding two final scores for each student. Exam scores and pass/fail rates were compared using paired t-tests and McNemar tests, respectively.
Results: A total of 935 students across the 10 exams were included in the study. CBM scores were significantly lower than NRS scores (by 0.82 points; P<0.001), and there was a significant shift in pass/fail classifications (P<0.001). Overall, 34 of the 141 students who failed under NRS (24.1%) passed under CBM, while 85 (10.7%) of the 794 students who passed under NRS failed under CBM. The shift was also significant in 5 of the 10 individual exams (P<0.05). Scores tended to worsen under CBM: the proportion of students with "A" grades fell from 8.4% to 4.7%, while the proportion with "D" grades rose from 15.1% to 20.5%.
Conclusion: CBM yields scores that differ significantly from NRS, as reflected in the distinct pass/fail rates, suggesting its potential as a better assessment tool. Further studies with other types of assessment are needed to validate the method with respect to reducing cheating and guesswork. However, replacing conventional scoring methods with CBM requires further evidence and careful consideration.
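As a rough illustration of the two scoring routes compared above, the sketch below scores the same simulated responses with NRS and with a hypothetical CBM matrix, then applies the paired t-test and McNemar test named in the Methods. The certainty matrix (a Gardner-Medwin-style scheme), the pass mark, and all response data are assumptions for illustration only; the study's actual scoring matrix is not given in the abstract.

```python
# Illustrative NRS vs CBM comparison on simulated responses (not the study's data).
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_students, n_items = 100, 50

correct = rng.random((n_students, n_items)) < 0.65          # True where the answer is correct
certainty = rng.integers(1, 4, size=(n_students, n_items))  # self-reported certainty, 1 to 3

# Number Right Scoring: one mark per correct answer, no penalty, expressed as a percentage.
nrs = correct.sum(axis=1) / n_items * 100

# Certainty-Based Marking: marks depend on correctness AND stated certainty.
# Hypothetical matrix: confident correct answers earn more, confident errors are penalised.
# Rescaled to a 0-100 range so the two scores are comparable.
reward  = np.array([0, 1, 2, 3])    # marks when correct, indexed by certainty level
penalty = np.array([0, 0, -2, -6])  # marks when wrong, indexed by certainty level
raw_cbm = np.where(correct, reward[certainty], penalty[certainty]).sum(axis=1)
cbm = (raw_cbm - n_items * penalty[3]) / (n_items * (reward[3] - penalty[3])) * 100

# Paired t-test on the two scores obtained by the same students.
t, p_scores = stats.ttest_rel(nrs, cbm)

# McNemar test on the shift in pass/fail classification (pass mark of 60 assumed).
pass_nrs, pass_cbm = nrs >= 60, cbm >= 60
table = [[np.sum(pass_nrs & pass_cbm),  np.sum(pass_nrs & ~pass_cbm)],
         [np.sum(~pass_nrs & pass_cbm), np.sum(~pass_nrs & ~pass_cbm)]]
print(f"mean NRS-CBM difference = {np.mean(nrs - cbm):.2f}, t = {t:.2f}, p = {p_scores:.4g}")
print(mcnemar(table, exact=False))
```

Note that McNemar's test uses only the discordant cells (students whose pass/fail classification changes between the two scoring methods), which is exactly the shift the Results section reports.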

Similar Papers
  • Research Article
  • Cite count: 1
  • 10.7759/cureus.81590
Effectiveness of Modified Flipped Classrooms Integrating Scenario-Based Questions, Multiple-Choice Question Assessments, and Mind Maps in Blood Physiology.
  • Apr 1, 2025
  • Cureus
  • Mayank Agarwal + 1 more

Background Conventional lectures in medical training may not always foster adequate comprehension and retention of complex physiological concepts. A modified flipped classroom integrating open-ended scenario-based questions (SBQs), multiple-choice question (MCQ) assessments using an online response recording (ORR) system, and mind maps may enhance student comprehension, engagement, and knowledge retention. This study evaluates first-year Bachelor of Medicine and Bachelor of Surgery (MBBS) students' perceptions of an integrated instructional module on blood physiology. Methods This cross-sectional study was conducted at the All India Institute of Medical Sciences, Raebareli, India, from November 2024 to February 2025, involving 96 first-year MBBS students. The instructional module consisted of 21 structured lectures that utilized a modified flipped classroom approach with pre-class study materials and in-class PowerPoint-based teaching (Microsoft Corp., Redmond, WA). This approach was integrated with mind map navigations, SBQ discussions, and MCQ assessments via Google Forms (Google LLC, Mountain View, CA). Student perceptions were assessed using a validated 14-item questionnaire categorized into three domains: handout quality, comprehension and retention, and satisfaction and engagement. Additionally, two questions assessed the consistency of handout review before classes and the preferred method among SBQs, MCQs, and mind maps for enhancing understanding and retention. Responses were recorded on a five-point Likert scale. Data analysis included descriptive statistics, principal component factor analysis for construct validity, and Cronbach's alpha for reliability assessment. Results Students highly appreciated the quality of structured handouts (4.57 ± 0.42). Students believed that integrating SBQ discussions, MCQs with ORR, and mind maps helped improve knowledge retention and comprehension (4.28 ± 0.44) in a satisfactory and engaging environment (4.49 ± 0.41). Most students acknowledged the benefits of these methods: 91% agreed that MCQs reinforced key concepts, 96% reported an improved understanding through SBQ discussions, and 92% found mind maps helpful for knowledge retention. Additionally, 72% preferred the combined approach of SBQs, MCQs with ORR, and mind maps for comprehension and retention. However, only 26% consistently reviewed the handouts before classes. Conclusion The modified flipped classroom model integrating SBQs, MCQs with ORR, and mind maps helped students comprehend blood physiology while maintaining engagement in lectures. This structured instructional module offers a feasible and effective strategy for enhancing comprehension and engagement in medical education, even with limited resources.
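The reliability step mentioned above (Cronbach's alpha on a 14-item, five-point Likert questionnaire) can be sketched as follows; the response matrix is simulated for illustration, not the study's data.

```python
# Minimal sketch of Cronbach's alpha for a Likert questionnaire (simulated responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(3, 6, size=(96, 14))    # hypothetical 14-item, 5-point responses
print(f"alpha = {cronbach_alpha(responses):.2f}")
```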

  • Research Article
  • Cite count: 1
  • 10.22329/jtl.v14i1.6300
The Development and Use of a Multiple-Choice Question (MCQ) Assessment to Foster Deeper Learning: An Exploratory Web-Based Qualitative Investigation
  • May 28, 2020
  • Journal of Teaching and Learning
  • Gareth R Davies + 2 more

This paper reports on the development and piloting of a new model of multiple-choice question (MCQ) assessment used in two undergraduate degree modules at a tertiary university. The new model was purposefully designed to promote deeper learning closely aligned with the SOLO taxonomy. Students were invited to participate in an exploratory qualitative study exploring their experience of learning using this new assessment. In total, 13 students completed an online open-ended qualitative questionnaire. Data was analyzed thematically. Four themes were generated: (a) empowered choice, (b) iterative reading, (c) forcing comparison, and (d) justified understandings. Findings suggest that the new model MCQ assessment promoted wider and more prolonged engagement with learning materials and fostered critical comparisons resulting in deeper learning. Limitations in study design mean that further research is merited to develop our model of MCQ assessment and enhance our understanding of students' learning experience.

  • Research Article
  • Cite count: 29
  • 10.1007/s11606-007-0117-4
Relationship Between Peer Assessment During Medical School, Dean’s Letter Rankings, and Ratings by Internship Directors
  • Jan 1, 2007
  • Journal of General Internal Medicine
  • Stephen J Lurie + 4 more

Background: It is not known to what extent the dean's letter (medical student performance evaluation [MSPE]) reflects peer-assessed work habits (WH) skills and/or interpersonal attributes (IA) of students.
Objective: To compare peer ratings of WH and IA of second- and third-year medical students with later MSPE rankings and ratings by internship program directors.
Design and Participants: Participants were 281 medical students from the classes of 2004, 2005, and 2006 at a private medical school in the northeastern United States, who had participated in peer assessment exercises in the second and third years of medical school. For students from the class of 2004, we also compared peer assessment data against later evaluations obtained from internship program directors.
Results: Peer-assessed WH were predictive of later MSPE groups in both the second (F = 44.90, P < .001) and third years (F = 29.54, P < .001) of medical school. Interpersonal attributes were not related to MSPE rankings in either year. MSPE rankings for a majority of students were predictable from peer-assessed WH scores. Internship directors' ratings were significantly related to second- and third-year peer-assessed WH scores (r = .32 [P = .15] and r = .43 [P = .004]), respectively, but not to peer-assessed IA.
Conclusions: Peer assessment of WH, as early as the second year of medical school, can predict later MSPE rankings and internship performance. Although peer-assessed IA can be measured reliably, they are unrelated to either outcome.

  • Research Article
  • Cite count: 1
  • 10.1097/acm.0b013e3181ea38b0
University of Cincinnati College of Medicine
  • Sep 1, 2010
  • Academic Medicine
  • Anne Gunderson + 1 more

  • Research Article
  • Cite count: 1
  • 10.1002/ase.2323
Testing anatomy: Dissecting spatial and non-spatial knowledge in multiple-choice question assessment.
  • Aug 2, 2023
  • Anatomical Sciences Education
  • Julie Dickson + 3 more

Limited research has been conducted on the spatial ability of veterinary students and how this is evaluated within anatomy assessments. This study describes the creation and evaluation of a split design multiple-choice question (MCQ) assessment (totaling 30 questions divided into 15 non-spatial MCQs and 15 spatial MCQs). Two cohorts were tested: one received a 2D teaching method in the academic year 2014/15 (male 15/108, female 93/108), and the second a 3D teaching method in the academic year 2015/16 (male 14/98, female 84/98). The evaluation of the MCQ demonstrated strong reliability (KR-20 = 0.71 for the 2D cohort and 0.63 for the 3D cohort), indicating that the MCQ consistently tests the same construct. Factor analysis of the MCQ provides evidence of validity of the split design of the assessment (RR = 1.11, p = 0.013). Neither cohort outperformed the other on the non-spatial questions (p > 0.05); however, the 3D cohort scored statistically significantly higher on the spatial questions (p = 0.013). The results of this research support the design of a new anatomy assessment aimed at testing both anatomy knowledge and the problem-solving aspects of anatomical spatial ability. Furthermore, a 3D teaching method was shown to increase students' performance on anatomy questions testing spatial ability.
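A minimal sketch of the KR-20 reliability coefficient reported above, applied to simulated dichotomous item scores for a 30-item MCQ; the logistic response-generation model is an assumption for illustration only.

```python
# Minimal sketch of the Kuder-Richardson 20 (KR-20) coefficient on simulated 0/1 item scores.
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """scores: (n_students, n_items) matrix of 0/1 item scores."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                       # proportion correct per item
    q = 1 - p
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total test scores
    return k / (k - 1) * (1 - (p * q).sum() / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(size=(108, 1))               # latent student ability (assumed)
difficulty = rng.normal(size=(1, 30))             # latent item difficulty (assumed)
responses = (rng.random((108, 30)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)
print(f"KR-20 = {kr20(responses):.2f}")
```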

  • Conference Article
  • 10.21125/edulearn.2020.0780
MAKING IT FIT: EXAMINING THE ASSESSMENT OF CONTEXTUAL KNOWLEDGE AND UNDERSTANDING IN THE POSITIVIST ASSESSMENT MODALITY OF MEDICAL EDUCATION
  • Jul 1, 2020
  • Eleanor Hothersall + 4 more

Introduction: Becoming a medical doctor in the United Kingdom (UK) requires completion of an undergraduate medical degree followed by postgraduate clinical training. Undergraduate education has become increasingly standardised over recent decades and is quality assured by the UK General Medical Council (GMC). The GMC are currently in the process of introducing a national assessment for all graduating doctors which will be a requirement of a licence to practice in the UK. This assessment for UK graduates will comprise of a knowledge based exam composed of multiple choice questions (MCQ), and a clinical exam organised by medical schools. The GMC also define UK Medical schools’ curricula, within three main domains of Professional Values and Behaviours, Professional Skills and Professional Knowledge. Within these domains are many learning outcomes (LO) including those relating to sociology, psychology, population health and research methods. These subjects, among others, are highly contextual, and while knowledge is required, it is the evaluation and application of these areas which have a substantive impact on the practice of a doctor. However, these topics will be included within the MCQ component of the new assessment, and are currently included in most UK medical schools' MCQ assessments. There is currently no published evidence for either construct or content validity for these topics assessed in this modality. Methodology: MCQs relevant to sociology, psychology, population health or research methods were identified using tags and keywords from a national bank of MCQs from the UK Medical Schools Council Assessment Alliance. They were categorised by LO, topic, content and task. Pooled psychometric data was examined. MCQs were described as “high” performing using a discrimination or point biserial measures ≥0.2. MCQs were examined to see whether they would be considered to contain flaws according to question writing guidelines. Results: 328 MCQs were identified, of which 113 had been used. 215 MCQs had been rejected during the validation process (65.5%). 26 MCQs assessed psychology (23.0%); 36 population health (31.9%); 47 research methods (41.6%), and 4 sociology (3.5%). Mean overall facility index was 0.55 (sd=0.25), mean discrimination index was 0.15 (0.12), and the mean point biserial was 0.09 (0.12). There were significant differences in facility, discrimination and point biserial measure compared by LO. High performing MCQs generally tested population health or research methods, and were more often knowledge-based. Low performing MCQs were more likely to assess psychology or sociology. Over 40% of MCQs in both groups contained flaws. Conclusion: Some areas of population health, sociology, psychology and research methods can be validly assessed using multiple choice questions, particularly the topics of epidemiology, infectious diseases, occupational health, screening or statistics. Sociology is significantly under-represented. Topics included represent only a small fraction of the required knowledge, and gives no opportunity to test application or assimilation of knowledge. Certain topics may be included because they fit this positivist format, and thus the assessment paradigm, rather than having any content validity. There is an urgent need to develop other assessment tools for population health, sociology, psychology and research methods topics, and to publish more evidence of existing assessment methods.
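The item statistics used above to classify MCQs as "high" performing (facility index and a point-biserial of at least 0.2) can be sketched as follows; the responses are simulated for illustration, not drawn from the Medical Schools Council Assessment Alliance bank.

```python
# Minimal sketch of per-item facility and point-biserial statistics with a >=0.2 flagging rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ability = rng.normal(size=(500, 1))               # latent candidate ability (assumed)
difficulty = rng.normal(size=(1, 40))             # latent item difficulty (assumed)
responses = (rng.random((500, 40)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)

total = responses.sum(axis=1)
for i in range(responses.shape[1]):
    item = responses[:, i]
    facility = item.mean()                         # facility index: proportion answering correctly
    rest = total - item                            # total score excluding this item
    r_pb, _ = stats.pointbiserialr(item, rest)     # item-rest point-biserial correlation
    if r_pb >= 0.2:
        print(f"item {i:2d}: facility={facility:.2f}, r_pb={r_pb:.2f} (high performing)")
```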

  • Research Article
  • Cite count: 6
  • 10.32604/csse.2022.019523
I-Quiz: An Intelligent Assessment Tool for Non-Verbal Behaviour Detection
  • Jan 1, 2022
  • Computer Systems Science and Engineering
  • B T Shobana + 1 more

Electronic learning (e-learning) has become one of the widely used modes of pedagogy in higher education today due to the convenience and flexibility offered in comparison to traditional learning activities. Advancements in Information and Communication Technology have eased learner connectivity online and enabled access to an extensive range of learning materials on the World Wide Web. Post covid-19 pandemic, online learning has become the most essential and inevitable medium of learning in primary, secondary and higher education. In recent times, Massive Open Online Courses (MOOCs) have transformed the current education strategy by offering a technology-rich and flexible form of online learning. A key component to assess the learner’s progress and effectiveness of online teaching is the Multiple Choice Question (MCQ) assessment in most of the MOOC courses. Uncertainty exists on the reliability and validity of the assessment component as it raises a qualm whether the real knowledge acquisition level reflects upon the assessment score. This is due to the possibility of random and smart guesses, learners can attempt, as MCQ assessments are more vulnerable than essay type assessments. This paper presents the architecture, development, evaluation of the I-Quiz system, an intelligent assessment tool, which captures and analyses both the implicit and explicit non-verbal behaviour of learner and provides insights about the learner’s real knowledge acquisition level. The I-Quiz system uses an innovative way to analyse the learner non-verbal behaviour and trains the agent using machine learning techniques. The intelligent agent in the system evaluates and predicts the real knowledge acquisition level of learners. A total of 500 undergraduate engineering students were asked to attend an on-Screen MCQ assessment test using the I-Quiz system comprising 20 multiple choice questions related to advanced C programming. The non-verbal behaviour of the learner is recorded using a front-facing camera during the entire assessment period. The resultant dataset of non-verbal behaviour and question-answer scores is used to train the random forest classifier model to predict the real knowledge acquisition level of the learner. The trained model after hyperparameter tuning and cross validation achieved a normalized prediction accuracy of 85.68%.
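A minimal sketch of the final classification stage described above: a random forest, tuned and cross-validated, predicting a knowledge-acquisition label from non-verbal behaviour features plus MCQ scores. The feature names, labels, and data are assumptions for illustration; the paper's actual feature extraction and pipeline are not reproduced here.

```python
# Illustrative random forest for predicting a knowledge-acquisition label (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([
    rng.random(n),             # e.g. fraction of time gaze is on screen (assumed feature)
    rng.random(n),             # e.g. normalised head-movement rate (assumed feature)
    rng.integers(0, 21, n),    # raw MCQ score out of 20 questions
])
y = (X[:, 2] + 5 * X[:, 0] > 14).astype(int)   # synthetic "real knowledge" label

# Hyperparameter tuning with grid search, then cross-validated accuracy.
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}, cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("CV accuracy:", cross_val_score(grid.best_estimator_, X, y, cv=5).mean())
```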

  • Research Article
  • Cite count: 13
  • 10.1097/00001888-200010001-00033
Gauging the outcomes of change in a new medical curriculum: students' perceptions of progress toward educational goals.
  • Oct 1, 2000
  • Academic medicine : journal of the Association of American Medical Colleges
  • Gregory Makoul + 2 more

After decades of concern about the lack of momentum in reforming medical curricula, a number of schools have introduced significant revisions and innovations in recent years. In most cases, the goals of these changes have followed the general principles promulgated by the Association of American Medical Colleges' (AAMC's) General Professional Education of the Physician (GPEP) and College Preparation for Medicine Report and other similar documents.1,2 Objectives consistent with these goals have been codified and disseminated through the AAMC's Medical School Objectives Project (MSOP).3 Several new educational strategies (e.g., problem-based learning) and course domains (e.g., courses in professional skills and perspectives) have become common elements of the resulting curricular initiatives at many medical schools.4 Given the need to track the effects and effectiveness of change in medical education programs, 5,6,7,8 Makoul developed the Student Perception Survey, 9 which focuses on how students view both the learning environment and their own learning experiences. It was first administered at Northwestern University Medical School in 1993, and has since been used by medical schools at the University of Chicago, Washington University, University of Utah, Medical University of South Carolina and, most recently, the University of Minnesota at Duluth. This study limits analysis to data collected at Northwestern between 1993 and 1999. Context In 1993, Northwestern University Medical School implemented a totally new first- and second-year (M1-M2) curriculum. Other, less sweeping, changes in the clinically oriented third- and fourth-year curriculum have been made more incrementally over the past decade, and are not a focus of this report. While some improvements have been made in our nearly seven years of experience with the M1-M2 curriculum, the basic concept and format are still firmly in place. The curriculum is composed of four courses, each presented in a series of discrete, topically focused units.10 Each course and nearly every unit are interdisciplinary in nature and draw faculty from a number of departments; all are managed and funded centrally by the dean's administration. Two areas of emphasis differentiate the current M1–M2 curriculum from its predecessor. The first is a change in the way we expect students to learn medicine. Our students are now explicitly regarded as adult learners, with a wide variety of backgrounds, aptitudes, and learning styles. Adult education models embrace this diversity and provide a framework for continuous self-directed education beyond the formal curriculum. Moreover, the very nature of the profession demands that students learn to “think on their feet,” relating different areas of knowledge one to another and serving as critics of their own and others' reasoning processes. Accordingly, the curriculum provides a variety of learning formats, with an emphasis on interactive, discussion-based small-group activities. In addition, the clinical skills units include peer observation and feedback on a regular basis.10,11 The second emphasis is a dramatic increase in the attention paid to issues of professional perspectives and professional skills. As detailed by Curry and Makoul,4 attention to students' interpersonal skills and attitudes and to the interface of the medical profession with society at large had grown steadily for some years. Not until the early 1990s, however, did schools begin to address these issues comprehensively. 
Since then, professionalism has become much more visible on the medical education agenda.3 The conceptual framework of patient-centered medicine (also referred to as relationship-centered medicine), which highly values the physician's capacity for empathy, attentive listening, and concern for the patient's perspective,12 has been instrumental in bringing about these changes. The very breadth and comprehensiveness of significant educational reform make it difficult to reliably evaluate the specific impact of any component. Further, consistent with the focus on adult learning and professional development (i.e., we want our students to mature as self-aware professionals), we consider students' perceptions to be an important element of curriculum evaluation. We used the Student Perception Survey as our program evaluation tool because it offers a broad view of students' attitudes and experiences. For instance, we were interested in assessing, over a period of years, whether the new M1-M2 curriculum affected students' perceptions about the importance of key educational goals, and whether it had an effect on their perceived progress toward those goals. Educational Goals: Importance. There is some concern that medical students become less idealistic and more cynical as they progress through the curriculum.13,14 On the other hand, students are likely to place more emphasis on areas relevant to clinical practice as they approach the clinical clerkship phase of their education. To assess whether students place more or less value on key educational goals after their first two years of medical school, we can compare responses to the Student Perception Surveys administered to incoming students with those to surveys administered to the same students at the end of their second year (just before clinical clerkships begin). Since we expect that incoming students will highly value all of the goals, thus generating a ceiling effect, we do not expect the importance ratings to rise. Neither do we expect them to fall, since the new curriculum attempts to reinforce the value of these goals. Thus, our expectations regarding importance ratings are phrased as our first (null) hypothesis: There will be no statistically significant difference in the importance ratings when Student Perceptions Surveys administered to incoming students are compared with those administered at the end of the second year. Educational Goals: Progress. Attending physicians' comments regarding the readiness and performances of students in their clerkships provide one good indication of whether a new M1-M2 curriculum is effective. However, it is difficult to systematically evaluate progress toward a variety of goals with such a method. Since we have a pass—fail grading system, the only grade-like metric available is the U.S. Medical Licensing Examination (USMLE) Step 1 score, also poorly suited to address a diverse set of goals. The Student Perception Survey allows us to assess students' views about the extent to which the curriculum has helped them progress toward each of the goals listed in Table 1. 
A brief “In Progress” article published in Academic Medicine reported immediate positive changes in ten of the 16 educational goals when data collected from the class of 1996, which progressed through the first two years before the curriculum was implemented, were compared with data from the classes of 1997 and 1998, the first cohorts to complete the new M1-M2 curriculum.9 Since we expect the revised curriculum to prove effective in maintaining those changes, we offer the second hypothesis: Students who have progressed through the new curriculum will report more progress toward educational goals than will students who completed the survey before the new curriculum was in place.TABLE 1: Responses to Importance of Educational Goals Section of the Student Perception Survey by Incoming and Experienced Students at Northwestern University Medical School, Classes of 1997–2001*Method Student Perception Survey. The survey gathers information about medical students' perceptions regarding faculty contact, educational goals, educational activities, and patient-centered tasks of care. It also gauges learning orientation, social orientation, career plan, conceptions of health, and demographic information. It is administered longitudinally via scan-form or computer: once at the beginning of medical school (i.e., during orientation week) and again at the end of the second year (i.e., just before clerkships). (We ran a study in 1997 to compare pencil-and-paper, scan-form, and computer versions of the survey; no difference in response patterns was detected.) This report includes data collected at both time points from students in the classes of 1996–2001. The survey is usually completed by all students in each cohort; it was distributed to fewer second-year students in 1995 and 1996, and fewer incoming students in 1998, due to administrative errors. Social security numbers serve as identification tags, allowing us to match surveys from incoming and experienced students without accessing their names or creating another set of identification numbers. Educational Goals. In 1990, the dean, with the approval of all department chairs and senior deans, established eight goals for medical school education.10 The 16 goals assessed in the Educational Goals section of the Student Perception Survey (see Table 1) were developed by explicating these original eight (e.g., operationalizing “communication”) and then expanding the list to include four additional goals expressed by faculty who had developed the new curriculum for the first two years of medical school. Table 1 indicates which of the goals were added. Nunnally emphasized that the plan and procedure of an item's generation is a primary determinant of its content validity.15 Drawing the items directly from goals outlined by the medical school certainly enhanced content validity. 
Further support comes from the observation that these goals are not unique to Northwestern; they are reflected in blueprints for medical education,1,2,3 deemed relevant by the other schools using the Student Perception Survey, and in the expressed values of practicing physicians.16 The items also have representational validity, as pilot tests conducted during the survey-development process indicated that medical students understood these items as intended.17 The Educational Goals section of the survey asks both incoming and experienced students to rate the importance of these 16 goals on a scale ranging from 0 = “not at all important” to 4 = “absolutely essential.” The intervening scale points are labeled 1 = “slightly important,” 2 = “moderately important,” 3 = “very important.” The survey administered at the end of the second year also asks students to indicate the extent to which their medical school experience has helped them progress toward each goal. The scale for measuring progress ranges from 0 = “not at all” to 4 = “completely.” Importance. To test our first (null) hypothesis which posits little change in how students value the various educational goals, we performed paired t-tests on data from surveys administered to incoming and experienced students in the classes of 1997 through 2001, all of whom had been exposed to the new curriculum. Since we assert the null hypothesis, statistical power is an important consideration. Simply stated, the power of a test is the probability of rejecting the null hypothesis when it is indeed false. Given the large sample of matched pairs (n = 511), we chose a fairly conservative α level to avoid highlighting differences of trivial magnitude. At α =.01 (two-tailed), we have statistical power greater than.98 for detecting small to medium effect sizes.18 Progress. To test our second hypothesis, which states that the new curriculum should be associated with greater perceptions of progress toward the educational goals, we performed independent-sample t-tests on data from surveys administered to experienced students (those at the end of their second year). (One-way ANOVAs indicated that data from the classes of 1997 through 2001 could be combined because they were statistically similar. Thus, we ran t-tests to facilitate presentation and interpretation of results.) We compared the perceptions of students in the class of 1996 (n = 165), who had experienced the old curriculum, with those of students in the classes of 1997 through 2001 (n = 603). Again, the large sample size affords good statistical power. At α =.01 (two-tailed), we have statistical power greater than.80 for detecting small to medium effect sizes via these independent-sample t-tests.18 Results Importance. On average, the students rated all of the educational goals from “very important” to “absolutely essential” (see Table 1). When surveys administered at the two time points were matched and importance ratings were compared via paired t-tests, we found statistically significant, though relatively small, differences (Δ) in how the students valued four educational goals. Importance ratings increased for “become more proficient at learning on your own” (Δ =.11, p <.01) and “improve your problem solving skills” (Δ =.14, p <.001); they decreased for “become proficient in clinical decision making” (Δ = −.10, p <.001) and “become more aware of ethical issues in medicine” (Δ = −.13, p <.001). Progress. 
The students' mean ratings of the extent to which their experiences had helped them accomplish each goal were closer to the scale's mid-point that were the importance ratings (see Table 2). Students completing the new M1-M2 curriculum reported significantly more progress toward ten of the educational goals than did the cohort that progressed through the first two years before the new curriculum was implemented. The biggest changes were associated with “master skills for providing information to patients” (Δ =.50, p <.001), “gain a full appreciation for political, economic, and social influences on health care” (Δ =.42, p <.001), “become more comfortable when being assessed by your peers” (Δ =.35, p <.001), “become more proficient at learning on your own” (Δ =.33, p <.001), and “improve your problem solving skills” (Δ =.32, p <.001). The only decrease was associated with “master physical examination skills (Δ = −.10, ns).TABLE 2: Experienced Students' Perceived Progress toward Educational Goals* While in the Old Curriculum (Class of 1996) Versus the New Curriculum (Classes of 1997–2001), Northwestern University Medical SchoolSince distributions for some of the importance and progress items were not normal, we also ran nonparametric tests (Wilcoxon signed-ranks test for importance, Wilcoxon—Mann—Whitney test for progress). The power efficiencies of these tests are about 95% when compared with their parametric counterparts.19 We obtained exactly the same patterns of statistical significance, reinforcing the notion that parametric tests are robust when it comes to the assumptions of normality.20 Discussion A number of measures and methods (e.g., written tests, clinical skills exams, faculty reports) can provide data for assessment of students and curriculum evaluation. However, such data are relatively particular in nature. Just as clinical outcomes researchers obtain patients' perceptions to complement more objective measures of health,21 medical educators interested in the outcomes of curricular reform have gained important information by measuring students' perceptions in the areas of well-being,22 learning activity,23 learning environment,24 and long-term effects.25 This study's findings indicate the value of gauging students' perceptions regarding a variety of education goals as well. While there were statistically significant differences in importance ratings for 25% of the educational goals, there was no trend in terms of directionality. Thus, our first (null) hypothesis received general support; the value students placed on the educational goals remained relatively stable between orientation week and the end of the second year of the curriculum. As shown in Table 2, our second hypothesis, which focused on progress estimates, received general support as well. Students who had progressed through the new curriculum reported more progress toward ten of the educational goals than did students who completed the survey before the new curriculum was in place. All of the statistically significant differences in progress estimates were larger than any of the differences in importance ratings. This pattern of results was immediate9 and has been sustained over the years. It appears that the Patient, Physician & Society (PPS) course, which extends throughout the first two years, 10 contributes to increases in the students' perceived progress toward their educational goals. 
More specifically, the PPS course emphasizes providing information to patients, incorporates peer assessment and feedback, and explores the political, economic, and social influences on health care. We were pleased to find that, when compared with the students in the old curriculum, the students who had experienced the current M1-M2 curriculum reported more perceived progress toward the goals of becoming more proficient at learning on their own and developing skills to enhance lifelong learning. We attribute this change to the adult-learner and active-learning approach taken by all four of the M1-M2 courses. However, we did not see a similar gain in the area of identifying strengths and weaknesses in academic and clinical abilities, an important component of lifelong learning and mindful practice.26 The results suggest that we also need to focus our attention on helping students learn to manage time more effectively and understand how the stresses of life as a physician will affect their personal lives, two goals voiced by faculty who developed the new M1-M2 curriculum. Regarding the goal of developing skills for practicing health promotion and disease prevention, we are planning to move to a more clinically oriented PPS unit on health risks, in part because the students reported little increased progress in this area at the end of their second year. Finally, despite a well-received first-year unit on physical examination skills in PPS, we observed a decrease in perceived progress toward this skill set, a consistent and rather troubling finding over the years. We will continue to work toward improving students' confidence and competence in physical exam skills within the PPS course, as the first and second years of medical school offer an opportunity to ensure a consistent approach to teaching and learning basic skills. Our aim is to provide a solid foundation that can be built upon during the clerkships. While it would have been preferable to collect the Student Perception Survey's data for more than one cohort in the old curriculum, the survey could not be implemented until it was designed and tested. Still, the pattern of results is clear and consistent, and changes in progress estimates can be logically linked to changes in the curriculum. Further, results from other schools using the Student Perception Survey reinforce the findings regarding progress. For instance, progress estimates also increased at the University of Utah after a curricular revision. Interestingly, significant progress toward a similar number of goals was evident at both Northwestern and Utah, but the pattern of results (i.e., mix and magnitude of changes) differed. (We will be working with Dr. Neal Whitman and colleagues at Utah to determine the extent to which observed changes reflect the emphases of M1-M2 curricular reform at that institution.) Students' perceived progress toward their educational goals did not increase at schools that did not make substantial changes in their M1-M2 curricula during the period they have used the Student Perception Survey. Taken together, these observations highlight the generalizability and sensitivity of this approach to curriculum evaluation. The Student Perception Survey has proved a very useful tool for gauging the effects of curricular reform and identifying areas in need of more attention. We consider students' perceptions one important component of curriculum evaluation,27 and we will continue to monitor them carefully. 
At present, we are working to develop a questionnaire for residency program directors and another one for medical school alumni, each of which will draw on aspects of the Student Perception Survey. As noted by Gerrity and Mahaffy,5 this type of outcome data serves the important function of documenting where we have been and helping us better understand where we are going.
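The analysis pattern described above (paired t-tests on matched importance ratings with a Wilcoxon signed-rank check, and independent-sample t-tests on progress ratings with a Mann-Whitney check) might look like the following sketch on simulated 0-4 ratings; none of the numbers are the survey's data.

```python
# Illustrative parametric tests plus nonparametric robustness checks on simulated ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
incoming    = np.clip(rng.normal(3.4, 0.5, 511), 0, 4)                   # importance at orientation
experienced = np.clip(incoming + rng.normal(0.1, 0.4, 511), 0, 4)        # same students, end of year 2

t, p = stats.ttest_rel(incoming, experienced)                             # matched-pairs comparison
w, p_w = stats.wilcoxon(incoming, experienced)                            # nonparametric check
print(f"paired t: t={t:.2f}, p={p:.4f}; Wilcoxon signed-rank p={p_w:.4f}")

progress_old = np.clip(rng.normal(2.2, 0.7, 165), 0, 4)                   # old-curriculum cohort
progress_new = np.clip(rng.normal(2.6, 0.7, 603), 0, 4)                   # new-curriculum cohorts
t2, p2 = stats.ttest_ind(progress_old, progress_new)
u, p_u = stats.mannwhitneyu(progress_old, progress_new)
print(f"independent t: t={t2:.2f}, p={p2:.4f}; Mann-Whitney p={p_u:.4f}")
```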

  • Research Article
  • Cite count: 9
  • 10.3389/fpubh.2021.640204
Assessment of Global Health Education: The Role of Multiple-Choice Questions
  • Jul 22, 2021
  • Frontiers in Public Health
  • Nathan T Douthit + 9 more

Introduction: The standardization of global health education and assessment remains a significant issue among global health educators. This paper explores the role of multiple choice questions (MCQs) in global health education: whether MCQs are appropriate in written assessment of what may be perceived to be a broad curriculum packed with fewer facts than biomedical science curricula; what form the MCQs might take; what we want to test; how to select the most appropriate question format; the challenge of quality item-writing; and, which aspects of the curriculum MCQs may be used to assess.Materials and Methods: The Medical School for International Health (MSIH) global health curriculum was blue-printed by content experts and course teachers. A 30-question, 1-h examination was produced after exhaustive item writing and revision by teachers of the course. Reliability, difficulty index and discrimination were calculated and examination results were analyzed using SPSS software.Results: Twenty-nine students sat the 1-h examination. All students passed (scores above 67% - in accordance with University criteria). Twenty-three (77%) questions were found to be easy, 4 (14%) of moderate difficulty, and 3 (9%) difficult (using examinations department difficulty index calculations). Eight questions (27%) were considered discriminatory and 20 (67%) were non-discriminatory according to examinations department calculations and criteria. The reliability score was 0.27.Discussion: Our experience shows that there may be a role for single-best-option (SBO) MCQ assessment in global health education. MCQs may be written that cover the majority of the curriculum. Aspects of the curriculum may be better addressed by non-SBO format MCQs. MCQ assessment might usefully complement other forms of assessment that assess skills, attitude and behavior. Preparation of effective MCQs is an exhaustive process, but high quality MCQs in global health may serve as an important driver of learning.

  • Research Article
  • Cite count: 58
  • 10.1002/ca.22494
Impact of the clinical ultrasound elective course on retention of anatomical knowledge by second-year medical students in preparation for board exams.
  • Dec 22, 2014
  • Clinical Anatomy
  • Peter Kondrashov + 4 more

Ultrasound has been integrated into a gross anatomy course taught during the first year at an osteopathic medical school. A clinical ultrasound elective course was developed to continue ultrasound training during the second year of medical school. The purpose of this study was to evaluate the impact of this elective course on the understanding of normal anatomy by second-year students. An anatomy exam was administered to students enrolled in the clinical ultrasound elective course before the start of the course and after its conclusion. Wilcoxon signed ranks tests were used to determine whether exam scores changed from the pre-test to the post-test. Scores from two classes of second-year students were analyzed. Students who took the elective course showed significant improvement in the overall anatomy exam score between the pre-test and post-test (P < 0.001). Scores for exam questions pertaining to the heart, abdomen, upper extremity, and lower extremity also significantly improved from the pretest to post-test (P < 0.001), but scores for the neck and eye showed no significant improvement. The clinical ultrasound elective course offered during the second year of medical school provided students with an important review of key anatomical concepts while preparing them for board exams. Our results suggested that more emphasis should be placed on head and neck ultrasound to improve student performance in those areas. Musculoskeletal, abdominal, and heart ultrasound labs were more successful for retaining relevant anatomical information.

  • Research Article
  • Cite count: 1
  • 10.17161/kjm.vol15.17939
The Influence of the STORM Program and Other Elective Experiences During the Summer Between the First and Second Year on Medical Students' Career Interests.
  • Sep 21, 2022
  • Kansas journal of medicine
  • Mara Cunningham + 2 more

IntroductionThe purpose of this study was to investigate the influence of the Summer Training Option in Rural Medicine (STORM) program and other elective experiences during the summer between the first and second pre-clerkship years of medical school on medical students’ career intentions.MethodsA retrospective voluntary and anonymous cohort study was conducted by distributing an email survey to the 211 second-year medical students at the University of Kansas School of Medicine (KUSM). The survey consisted of a variety of questions regarding their recent summer break elective experiences.ResultsEighty-nine students (42.2% response rate) completed the survey; 21 respondents participated in the STORM program. Important factors influencing the choice of an elective included, working one-on-one with an educator, hands-on experiences, and receiving academic credit. Sixty-seven respondents (75.3%) concluded that their experience met their expectations, 50 (56.2%) concluded that their experience helped solidify their career goals, while 20 (22.5%) concluded that their experience made them question their career goals. Eleven respondents (12.4%) wished they had participated in a different summer experience, and 16 respondents (18.0%) changed their career plans after their summer experience.ConclusionsA break between first and second years of medical school allowed students to explore career options; such experiences may ignite a particular passion, solidify an already determined specialty choice, or dissuade a student from pursuing a particular career pathway. Medical school affirmation of the importance of significant, sustained, and student-chosen opportunities to work one-on-one with a mentor and engage in hands-on learning during the pre-clerkship years is crucial. The STORM program was one elective option that delivered on students’ expectations.

  • Research Article
  • Cite count: 7
  • 10.1080/13691457.2012.691873
Re-engineering the multiple choice question exam for social work
  • Sep 1, 2013
  • European Journal of Social Work
  • Gavin Heron + 1 more

The aim of this study is to devise a multiple choice question (MCQ) exam that provides students with opportunities to engage in a deep approach to learning. Multiple choice assessment is largely unused in social work degree courses in the UK because of associations with techniques such as guessing and rote learning, which do not correspond with deep approaches to learning. Strategies used to enhance opportunities for a deep approach to learning within the MCQ exam used in this study included certainty-based marking (CBM), enhancing the use of formative feedback and giving students responsibility for devising the MCQs. Results show that students use similar levels of deep learning when they completed a MCQ exam compared to those students who completed an essay exam. The deep learning approach for the MCQ exam was, however, less when compared to a different module that used an essay assignment. There is an increasing pressure on Higher Education to provide more robust assessment practices, and findings in this study suggest it may be time for social work tutors to reconsider the role of the MCQ format within the existing range of assessment tools.

  • Research Article
  • Cite count: 64
  • 10.1016/j.stueduc.2013.07.001
Scoring methods for multiple choice assessment in higher education – Is it still a matter of number right scoring or negative marking?
  • Aug 13, 2013
  • Studies in Educational Evaluation
  • Ellen Lesage + 2 more

  • Research Article
  • Cite count: 19
  • 10.4103/0019-5359.95934
Comparative assessment of multiple choice questions versus short essay questions in pharmacology examinations
  • Jan 1, 2010
  • Indian Journal of Medical Sciences
  • Amomin Mujeeb + 2 more

This retrospective study compared the performance of medical students in multiple choice questions (MCQs) and short essay questions (SEQs). Over the 3-year analysis period, 533 students had an average score of 51.34% (SD 9.9) in the SEQs and 64.71% (SD 9.9) in the MCQs. Regression analysis showed a significant correlation (r=0.64, P<0.01) between MCQs and SEQs. When student performance was grouped by final course grade, a statistically significant correlation between MCQ and SEQ scores existed only for the 405 students who received a passing grade (r=0.21, P<0.01). The MCQ and SEQ scores were not correlated for the 128 students who failed (r=0.11, P=0.08) or for the 70 students who achieved distinctions (r=-0.27, P=0.13). MCQ scores were significantly higher (P<0.01) than SEQ scores for each of the groups when analyzed by the two-way ANOVA test. The result of this study suggests that for most students, the strong correlation between MCQ and SEQ indicates that student performance was independent of testing format. For students at either end of the performance spectrum, the lack of correlation suggests that performance in one of the testing formats had a strong influence on the final course grade. In addition, those students who failed the course were likely to be weak in both testing modalities, whereas students in all grade groups were more likely to perform better in the MCQs than the SEQs.
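The stratified correlation analysis described above (Pearson r between MCQ and SEQ scores overall and within each final-grade group) can be sketched as below on simulated scores; the grade cut-offs and all data are assumptions for illustration.

```python
# Illustrative overall and within-grade-group Pearson correlations on simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 533
seq = np.clip(rng.normal(51.3, 9.9, n), 0, 100)                 # simulated SEQ percentages
mcq = np.clip(0.6 * seq + rng.normal(34, 8, n), 0, 100)         # simulated, correlated MCQ percentages
grade = np.where(seq < 40, "fail", np.where(seq > 65, "distinction", "pass"))  # assumed cut-offs

r_all, p_all = stats.pearsonr(mcq, seq)
print(f"overall: r={r_all:.2f}, p={p_all:.3g}")
for g in ("fail", "pass", "distinction"):
    mask = grade == g
    r, p = stats.pearsonr(mcq[mask], seq[mask])
    print(f"{g:12s} (n={mask.sum():3d}): r={r:.2f}, p={p:.3g}")
```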

  • Research Article
  • Cite count: 43
  • 10.1097/acm.0000000000000935
From Impairment to Empowerment: A Longitudinal Medical School Curriculum on Disabilities.
  • Jul 1, 2016
  • Academic Medicine
  • Cristina Sarmiento + 4 more

All physicians will care for individuals with disabilities; however, education about disabilities is lacking at most medical schools. Most of the schools that do include such education exclusively teach the medical model, in which disability is viewed as an impairment to be overcome. Disability advocates contest this approach because it overlooks the social and societal contexts of disability. A collaboration between individuals with disabilities, educators, and physicians to design a medical school curriculum on disabilities could overcome these differences. A curriculum on disabilities for first- and second-year medical students was developed during the 2013-2014 academic year and involved a major collaboration between a medical student, medical educators, disability advocates, and academic disability specialists. The guiding principle of the project was the Disability Rights Movement motto, "Nothing about us without us." Two small-group sessions were created, one for each medical school class. They included discussions about different models of disability, video and in-person narratives of individuals with disabilities, and explorations of concepts central to social perceptions of disability, such as power relationships, naming and stigmatization, and disability as identity. According to evaluations conducted after each session, students reported positive feedback about both sessions. Through this curriculum, first- and second-year medical students learned about the obstacles faced by individuals with disabilities and became better equipped to understand and address the concerns, hopes, and societal challenges of their future patients. This inclusive approach may be used to design additional curricula about disabilities for the clinical and postgraduate years.
