Abstract

All in all, the evidence is not convincing. Only four of the nine randomized studies used the conventional small-group learning paradigm and so qualify as studies of small-group learning of the kind relevant to medical education. The results of one of the four are impossible to interpret because the investigator was involved in both teaching and test construction. The three remaining studies showed no effect, a negative effect, and a positive effect, respectively. The nonrandomized studies failed to establish the comparability of the groups. The evidence therefore does not support the authors' call for "more widespread implementation of small-group learning in undergraduate SMET". Small-group learning has not been shown to support the acquisition of content any better [or worse] than large-group learning. In medical education, small groups are employed largely to develop teamwork skills, communication skills, and peer- and self-assessment skills, but these outcomes are not addressed in this meta-analysis.

More seriously, our rereading of these studies raises general concerns about meta-analysis in education, with important implications for evidence-based medical education. The meta-analysis under discussion at first appeared to be just the kind needed to guide an evidence-based educational enterprise. A closer look, however, revealed both what is lacking in the meta-analysis and some of the ways educational research and reporting need to change if anything like evidence-based education is ever to become a reality.

At the least, study design must be clearly described. If the design is nonrandomized, the groups should be described in sufficient detail to allow a meaningful interpretation of the effect of preexisting differences on the outcome measures. (This is why we limited our discussion here to the randomized studies.) Effect-size measures should be reported for all comparisons that bear on the impact of the intervention, including preexisting differences. Reporting significance is not enough: a significant result shows only that sampling error can be ruled out (with a low probability of error, p < .05) as an explanation of the connection between the intervention and the outcome; the effect can still be trivial and the comparisons confounded (a numerical sketch follows below). In addition, descriptions of the actual educational interventions need to be more comprehensive and precise. For the most part, the papers would have been strengthened by more of the information needed to replicate the studies and to decide which should be included in a given meta-analysis.

Perhaps most seriously, our rereading of these studies makes us question whether the results of educational studies can be meaningfully synthesized at all, given their idiosyncrasies and their many extraneous, uncontrolled factors. The conclusions from most educational studies, whether randomized or not, must therefore be highly qualified, with explicit warnings about preexisting differences and other confounding factors that plausibly account for the study results. These narrative qualifications, however, do nothing to adjust the effect-size measures, which are typically pooled or synthesized across studies, confounds and all. The idiosyncrasies of the studies seem to preclude any blanket qualification that could be applied conceptually across the collection of studies to arrive at a sound conclusion from the synthesis.
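To make the point about significance versus effect size concrete, the following sketch (our illustration, not drawn from any of the studies reviewed; the effect size and group sizes are hypothetical) shows how a trivially small standardized effect reaches p < .05 once the groups are large enough:

    # Illustration only (hypothetical numbers): a trivial effect can be "significant".
    # For a two-sample t-test with equal group sizes n, t = d * sqrt(n / 2),
    # where d is Cohen's d, the standardized effect size.
    from scipy import stats

    d = 0.10  # hypothetical, educationally negligible effect size
    for n in (20, 100, 500, 1000, 2000):          # hypothetical students per group
        t = d * (n / 2) ** 0.5                    # t statistic implied by d and n
        p = 2 * stats.t.sf(abs(t), df=2 * n - 2)  # two-sided p-value
        print(f"n per group = {n:5d}  t = {t:5.2f}  p = {p:.3f}")

    # At n = 1000 or 2000 per group, p < .05 even though d = 0.10 stays negligible.

Significance alone, in other words, cannot distinguish a meaningful effect from a trivial one; that distinction requires the effect-size measure itself.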
In brief, the meta-analysis considered here does not support the application of small-group learning in medical education and it raises questions about meta-analysis in education with implications for evidence-based education.
