Abstract

In this issue, the Research Report entitled "Results of an Online Refresher Course to Build Braille Transcription Skills in Professionals and Volunteers," by Herzberg and Rosenblum, uses pretest and posttest scores on a 20-item multiple-choice test to compare participants' knowledge before and after taking an online course. In this Statistical Sidebar, I will discuss pretest-posttest measures as a way to assess change. This approach is used frequently in scholarly research, given the logic that measuring performance or knowledge before some sort of training, and then measuring it again after the instruction, should provide an idea of the effectiveness of the intervention.

The pretest-posttest method, however, can be subject to a number of subtle flaws. As in any experiment, random assignment of participants to experimental and control groups makes the study stronger (for more information on this topic, please see the Statistical Sidebar "Control Groups and Experimental Groups: It Is All in the Numbers," published in the January-February 2015 issue). Unfortunately, it is not always possible to use random assignment, and even when participants are randomly placed in experimental and control groups, the pretest-posttest design is subject to threats to internal validity.

The particular design used in this report was a one-group pretest-posttest design, in which there was no control group and, therefore, no random assignment of participants. Although this model is quite common, it is open to threats of history, maturation, testing, and instrumentation. Because time passes between the pretest and the posttest, participants experience other events in the interim. This threat of history means that participants might be influenced by factors outside of the study that affect their performance on the posttest.
If enough time passes between the pretest and posttest, then the threat of maturation might be a factor, in which biological changes in the participants might influence posttest scores. There is also the possibility that, since the pretest and posttest are measuring the same knowledge or skill, the act of taking the pretest elevates knowledge or performance, meaning that the training is not fully responsible for the measured change. If the exact same assessment is used for both the pretest and the posttest, when participants take the posttest, they have already answered the questions or performed the skills it requires, which might artificially inflate performance on the posttest. …
