Abstract

Mohsen Tavakol

Interest in and use of experimental and quasi-experimental studies has increased among medical educators. These studies may be focused on a cause-and-effect relationship if they are rigorously conducted. Put another way, the medical education researcher manipulates or controls the independent variable (cause) in order to evaluate its impact on the dependent variable or outcome (effect). It is difficult to establish a cause-and-effect relationship in medical education research because the researcher is unable to control all covariables (confounding/intervening variables) that can influence the outcome of the study. Campbell and Stanley described confounding variables as threats to internal and external validity [1]. Internal validity refers to the degree to which changes in the outcome(s) (the dependent variable(s)) of the study can be accounted for by the independent variable(s). Factors that may be considered threats to internal validity are not part of the independent variables in an experimental study, but they can have a significant effect on the dependent variable(s) (outcome). Indeed, these factors, rather than the independent variable(s) (intervention(s)) of interest, may account for the results of the study. External validity is focused on the extent to which the results of the study can be generalised to the target population. Threats to internal and external validity can undermine the quality of a meta-analysis, which is grounded in a systematic review of the relevant literature. The methodological quality of experimental studies in medical education research may therefore be flawed. Given the possibility of methodological errors in experimental or quasi-experimental studies, authors of meta-analyses should first critically appraise the quality of all relevant studies included in the meta-analysis. For further discussion of criteria for experimental and quasi-experimental studies, I refer interested readers to more extended discussions of quantitative and qualitative methods in medical education research [2,3].

Intervention effect

One of the criteria for conducting an experimental study is to manipulate the experimental independent variable. By manipulating the independent variable, we mean that the researcher controls the independent or experimental variable to evaluate its impact on the dependent variable(s) (outcome(s)). In medical education research, the experimental variable is typically an educational intervention, for instance, the impact of simulation-based education on the development of clinical reasoning or on the performance of medical students. The researcher conducting an experimental study manipulates the intervention of interest (e.g., simulation-based training) by administering it to some students (the experimental group) and not to others (the control group). The control group usually receives a routine intervention, e.g., the usual teaching. After students are randomly assigned to the two groups, both groups take a pre-test as a baseline for comparison; a post-test is then given after the experimental group has received the intervention of interest and the control group has received the routine treatment. Using a measurement instrument to assess the performance of both groups before and after the intervention (or routine treatment) constitutes a pre-test-post-test control group design.
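
To make this design concrete, the following is a minimal sketch in Python, assuming hypothetical gain scores (post-test minus pre-test) for an experimental and a control group; the score values and the use of an independent-samples t-test are illustrative assumptions, not part of the original article.

    # Minimal sketch of a pre-test / post-test control group comparison (hypothetical data).
    import numpy as np
    from scipy import stats

    # Hypothetical gain scores (post-test minus pre-test) for each group
    experimental_gain = np.array([12, 9, 15, 11, 14, 10, 13, 12])  # simulation-based training
    control_gain = np.array([6, 8, 5, 7, 9, 6, 7, 8])              # routine teaching

    # Descriptive statistics for each group
    for label, gains in [("Experimental", experimental_gain), ("Control", control_gain)]:
        print(f"{label}: mean = {gains.mean():.2f}, SD = {gains.std(ddof=1):.2f}")

    # Independent-samples t-test comparing the two groups' gains
    t_stat, p_value = stats.ttest_ind(experimental_gain, control_gain)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Comparing gain scores is only one reasonable analysis for this design; an analysis of covariance with the pre-test as a covariate is another common choice, and the sketch uses gain scores purely for simplicity.
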
If we assume that the data collected to measure student performance are continuous, we calculate the means and standard deviations of student performance. Using inferential statistics, the researcher is able to determine the impact of the intervention of interest on the performance of students. The fundamental data analysis is to calculate effect size indices, which inform us about the magnitude of the effect of an intervention (e.g., simulation) on particular outcomes (e.g., student performance). The effect size indicates the magnitude of the difference between two means, e.g., the difference between the intervention and control group means on student performance. Effect sizes indicate whether the observed differences are important, and they are essential for conducting meta-analyses. Sometimes the outcomes of studies with experimental designs are dichotomous, and meta-analysts then use odds ratios (OR) or risk ratios (RR). In non-experimental studies, they may use the Pearson correlation coefficient (i.e., Pearson's r) to show the strength and direction of an effect.

The purpose of this introductory guide is to show how a meta-analysis works in the context of medical education research using experimental studies, and to introduce standards and methods for synthesising data from primary research studies using meta-analysis and related statistics. This paper does not deal with the preliminary steps of a meta-analysis, i.e., how to frame a problem, how to design the review, how to appraise the quality of primary research studies, and how to extract and code data for analysis; once these steps are completed, the meta-analysis itself is performed. Interested readers may refer to systematic review texts for guidance on conducting a thorough meta-analysis.
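
As a concrete illustration of these indices, the sketch below computes a standardised mean difference (Cohen's d with a pooled standard deviation), an odds ratio, and a risk ratio from hypothetical group summaries; the numbers, and the choice of Cohen's d as the standardised mean difference, are illustrative assumptions rather than formulas prescribed by the article.

    # Common effect-size indices used in meta-analysis (hypothetical numbers).
    import math

    # Standardised mean difference (Cohen's d with a pooled standard deviation)
    mean_exp, sd_exp, n_exp = 78.0, 8.0, 30   # experimental group summary
    mean_ctl, sd_ctl, n_ctl = 72.0, 9.0, 30   # control group summary
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_exp + n_ctl - 2))
    cohens_d = (mean_exp - mean_ctl) / pooled_sd
    print(f"Cohen's d = {cohens_d:.2f}")

    # Odds ratio and risk ratio for a dichotomous outcome (2 x 2 table of counts)
    a, b = 24, 6    # experimental group: passed, failed
    c, d = 15, 15   # control group: passed, failed
    odds_ratio = (a / b) / (c / d)
    risk_ratio = (a / (a + b)) / (c / (c + d))
    print(f"OR = {odds_ratio:.2f}, RR = {risk_ratio:.2f}")

With these hypothetical counts, the risk ratio is simply the ratio of the two groups' pass rates, while the odds ratio compares the odds of passing in each group; both point in the same direction but take different values.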

Highlights

  • One of the criteria for conducting an experimental study is to manipulate the experimental independent variable

  • The effect size indicates the magnitude of the difference between two means, e.g., the difference between the intervention and control group means on student performance

  • To describe the forest plot, suppose that we have systematically reviewed 12 articles to investigate the effect of simulation-based training on student performance (see the sketch below)
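
The third highlight refers to a forest plot summarising 12 studies. The sketch below shows one way such a plot might be drawn, assuming 12 hypothetical standardised mean differences with standard errors and a fixed-effect (inverse-variance) pooled estimate; the data, the weighting scheme, and the plotting code are illustrative assumptions, not material from the article.

    # Illustrative forest plot for 12 hypothetical studies with a fixed-effect
    # (inverse-variance) pooled estimate.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    effects = rng.normal(0.5, 0.2, 12)   # hypothetical standardised mean differences
    ses = rng.uniform(0.10, 0.25, 12)    # hypothetical standard errors

    weights = 1 / ses**2                              # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1 / np.sum(weights))

    ypos = np.arange(12, 0, -1)                       # study 1 at the top
    plt.errorbar(effects, ypos, xerr=1.96 * ses, fmt="s", color="black", capsize=3)
    plt.errorbar([pooled], [0], xerr=[1.96 * pooled_se], fmt="D", color="blue", capsize=3)
    plt.axvline(0, linestyle="--", color="grey")      # line of no effect
    plt.yticks(list(ypos) + [0], [f"Study {i}" for i in range(1, 13)] + ["Pooled"])
    plt.xlabel("Standardised mean difference (95% CI)")
    plt.tight_layout()
    plt.show()

Each square marks a study's effect estimate with its 95% confidence interval, and the diamond on the bottom row marks the pooled estimate; larger studies (smaller standard errors) receive more weight in the pooled result.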


