Abstract
Research on learning and memory has been conducted since the late 1800s, and various instructional strategies have increasingly been documented in authentic classroom settings. The instructional approaches an instructor chooses are a function of the instructor (eg, comfort, experience), the students, the criterial tasks the students need to accomplish, and the content. Selecting the most appropriate intervention is complicated by the costs associated with it, yet we often do not study or report the costs of the interventions we publish.

In formulary management, we conduct cost-benefit analyses to find the best therapies.1 Plots are often constructed with cost on the ordinate (increasing upward) and effectiveness on the abscissa (increasing to the right). The resulting graph contains 4 quadrants with which to assess potential solutions. The upper left quadrant holds the ill-favored solutions: more costly and less effective. Conversely, the bottom right quadrant contains the preferred solutions: less costly and more effective. The other two quadrants can represent harder decisions because their contents depend on how much cost we are willing to incur for a relative gain.

Can this also be done for instructional interventions? As a mental exercise, I developed a sample cost-effectiveness analysis for educational interventions (Figure 1). The data for this graph come from rough estimates of cost from faculty members across disciplines at a single university; effect sizes come from the literature on academic performance.2-10 I use an effect size of 0.40 as the benchmark for a good educational intervention.3 The graph shows that activities that support retrieval (eg, questioning, clickers) are low-cost but have good effect sizes.
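The quadrant logic described above can be made concrete with a few lines of code. This is a minimal sketch, not the article's actual data: the strategy names, faculty-hour costs, and effect sizes below are hypothetical numbers chosen only to exercise each quadrant.

```python
# Sketch of the cost-effectiveness quadrants: a new strategy is compared
# with an older one by the change in cost (ordinate) and change in
# effectiveness (abscissa). All numbers below are illustrative assumptions.

def classify(delta_cost, delta_effect):
    """Place a new strategy, relative to a comparator, on the
    cost-effectiveness plane."""
    if delta_cost > 0 and delta_effect <= 0:
        return "dominated"       # upper left: costs more, works no better
    if delta_cost <= 0 and delta_effect > 0:
        return "dominant"        # lower right: costs less, works better
    return "judgment call"       # diagonal quadrants: trade-off required

# Hypothetical strategies: (development cost in faculty hours, effect size)
lecture = (10, 0.20)   # comparator
clickers = (8, 0.45)   # assumed cheaper and more effective
flipped = (60, 0.50)   # assumed more effective but far costlier

for name, (cost, effect) in [("clickers", clickers), ("flipped", flipped)]:
    print(name, classify(cost - lecture[0], effect - lecture[1]))
```

Under these assumed numbers, clickers land in the dominant quadrant, while the flipped classroom falls on the diagonal where more cost buys more effect and a judgment call is needed.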
Cooperative learning strategies vary from think-pair-share to team-based learning, so costs can vary, but cooperative learning generally has large effect sizes. Feedback is similarly associated with large effect sizes, but its cost can vary depending on the type, frequency, and quality of the feedback. The final area is instructional technology (eg, videos, animation, PowerPoint). Technology can do great things, such as facilitating interaction and combining visual information with auditory information, but traditionally comes at a large cost. Those costs can include script-writing time, software purchases, editing, revising, and recording time, and technology updates. In the figure, technology occupies the spaces on the diagonal where both cost and effect increase, thus requiring more judgment about whether to implement these strategies.

Figure 1. Example of a cost-effectiveness analysis for various instructional approaches. “Dominated” refers to the new strategy being inferior to the older strategy because of higher cost and less effect. “Dominant” refers to the ...

To generate such graphs, we need to consider both effectiveness and cost. Effectiveness can be measured through examination scores, course grades, students’ attitudes or confidence, or performance in clinical practice experiences. Scores from examinations or quizzes are straightforward but may only assess short-term knowledge retention. Student attitudes or engagement might be a secondary outcome if the primary goal is to improve learning; these outcomes can be challenging because they may decrease in active learning environments even as performance increases.11 Confidence judgments are complex and may not represent actual knowledge.12 Clinical performance is the most relevant measure, but it is challenging to link a single curricular intervention (ie, a course change or active learning strategy) to the outcome (ie, an advanced pharmacy practice experience [APPE] grade).
We also face the issue of the comparison group. It has been shown for decades that active engagement can lead to better learning outcomes than lecture, yet we often use lecture as the control. To get a real sense of flipped classrooms, we should compare them with interactive lecture courses (ie, lecture with active learning) or compare active learning strategies with one another. This would be equivalent to comparing a new medication with the current standard therapy rather than with an outdated standard.

Most of the literature has focused on effect, but we rarely address costs. Many elements factor into cost, such as faculty members’ time to develop the course and materials, use of instructional designers, licenses and software, and the reusability or sustainability of the materials or resources. The first issue is determining how to combine time costs and financial costs into one measure; the second is keeping accurate records of how much time we spend on these tasks. We need to do a better job in our research efforts of documenting costs and outcomes, and perhaps start making relative comparisons (ie, active learning to active learning) more than absolute comparisons (eg, active learning to lecture). Lack of faculty time is always a barrier to improving teaching, and with dwindling financial resources, we need to be cost-efficient.
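One way to combine time costs and financial costs into a single measure, as the paragraph above asks, is to convert faculty hours to dollars and then compute an incremental cost-effectiveness ratio against the comparator, as formulary analyses do. The hourly rate, hours, and effect sizes below are hypothetical assumptions for illustration, not figures from the article.

```python
# Sketch: fold faculty time and direct expenses into one dollar figure,
# then compute the incremental cost-effectiveness ratio (ICER) of a new
# strategy versus a comparator. All numbers are assumed for illustration.

HOURLY_RATE = 75.0  # assumed fully loaded faculty cost per hour

def total_cost(hours, direct_dollars):
    """Combine faculty time and direct financial costs into dollars."""
    return hours * HOURLY_RATE + direct_dollars

def icer(cost_new, effect_new, cost_old, effect_old):
    """Extra dollars spent per one-unit gain in effect size."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Interactive lecture (comparator) vs. flipped classroom (new strategy)
old_cost, old_effect = total_cost(hours=15, direct_dollars=0), 0.30
new_cost, new_effect = total_cost(hours=50, direct_dollars=500), 0.55

ratio = icer(new_cost, new_effect, old_cost, old_effect)
print(f"${ratio:,.0f} per unit of additional effect size")
```

Note this compares active learning with an interactive lecture rather than with plain lecture, in line with the relative-comparison argument above; the ratio is only as credible as the time records behind the `hours` inputs.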