Abstract

In this series of Making Sense of Methods and Measurement columns, we will look at strategies for assessing differences between and among group means: t tests, analysis of variance, and multivariate analysis of variance. Each of these analytic strategies allows us to infer whether the differences we observe between and among groups are likely because of chance or the variable (intervention) of interest, such as a teaching strategy like simulation. In educational research, we often want to compare some characteristic between two groups. For example, we may want to compare knowledge between an intervention group of students, who participated in an innovative learning activity such as simulation, and a control group of students, who participated in a traditional learning activity such as lecture. To assess the students' knowledge after the intervention, we may administer a multiple choice examination. We can then calculate the mean examination score of the intervention (simulation) group and the mean examination score of the control (lecture) group for comparison. One strategy for comparing the mean examination scores between the groups of students is to administer the "eyeball test." Without doing any further analysis, we could see whether one group's examination scores were higher than the other group's. From this, we could conclude that the teaching strategy the group with the higher examination scores was exposed to must have been the more effective teaching strategy, or at least the one that produced higher examination scores, right? Wrong. Statistically, the students in each group are considered a random sample from the population of students. Therefore, the differences in examination score we observe could be the result of sampling error or chance.
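The role of sampling error described above can be illustrated with a small simulation. The sketch below uses invented data: two samples are drawn from the same population of examination scores, so any difference in their means is due to chance alone, yet the "eyeball test" would still see a difference.

```python
# A minimal sketch (invented data) of why the "eyeball test" misleads:
# two random samples drawn from the SAME population of exam scores
# will usually have different means purely by chance (sampling error).
import random

random.seed(1)

# One hypothetical population of exam scores, mean ~80, SD ~8
population = [random.gauss(80, 8) for _ in range(10_000)]

# Two groups sampled from that single population; no intervention
# distinguishes them, so any mean difference is chance alone.
group_a = random.sample(population, 30)
group_b = random.sample(population, 30)

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# The group means differ even though the groups are interchangeable.
print(f"mean A = {mean_a:.1f}, mean B = {mean_b:.1f}")
```

Rerunning with different seeds produces different gaps between the two means, which is exactly the chance variation a significance test is designed to account for.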
Beyond just observing differences in mean examination scores, we need to answer the question, "What is the probability that the observed difference in mean examination scores between the simulation and lecture groups was because of chance?" If the probability is sufficiently small, we may conclude that the observed difference is not likely because of chance and that the teaching strategy likely affected the examination scores (Norman & Streiner, 2003). Let's take a look at the sample statistics in the Table. Using the "eyeball test," we could conclude that the students who participated in the simulation learning activity, on average, earned higher examination scores than the students who participated in the lecture-based learning activity. However, using an independent (unpaired) samples t test, we can take into account important factors such as the sample size and within-group variability (standard deviation) to determine whether the difference between the two means is statistically significant. Remember, we are looking for differences between groups, but there are important differences within the groups that must also be considered. An independent t test is used when the two groups consist of different people. The two groups may have received the same treatment, but they are two separate groups of subjects. If we take this one step further, we may want to compare pre- and postintervention examination scores between the two groups of students, one that participated in a simulation-based learning activity and one that participated in a lecture-based learning activity. In this case, we would calculate the difference scores (between preintervention examination scores and postintervention examination
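The independent (unpaired) samples t test described above can be sketched in a few lines. The scores below are invented for illustration; they are not the values from the Table. Welch's variant is used here so the two groups' standard deviations need not be assumed equal.

```python
# Hypothetical example: independent-samples t test comparing exam scores
# between a simulation group and a lecture group. All scores are
# invented for illustration only.
from scipy import stats

simulation_scores = [88, 92, 85, 90, 87, 91, 84, 89]
lecture_scores = [82, 85, 80, 84, 79, 83, 81, 86]

# equal_var=False requests Welch's t test, which does not assume equal
# within-group variances; equal_var=True gives the classic Student's t.
t_stat, p_value = stats.ttest_ind(simulation_scores, lecture_scores,
                                  equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A sufficiently small p value suggests the observed difference in mean
# scores is unlikely to be due to chance alone.
```

The test weighs the difference between the group means against the within-group variability and the sample sizes, which is precisely why it improves on the eyeball test.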
