Abstract

This paper reviews knowledge surveys as a best practice in assessment and illustrates how this assessment tool was used to compare teaching methods, and how students valued it, during a 5-year study. The goal was to improve assessment, active learning, and course design. On each survey, students rated a type of confidence known as self-efficacy before and after instruction, used the survey as a study guide during instruction, and rated its value at the end of the course. Results showed gains in self-efficacy (p < .001), high student ratings of the survey's value, and differences in scores across teaching methods (p < .001).

Highlights

  • To determine whether students’ value ratings for the survey differed across teaching methods, ratings were compared with the Kruskal-Wallis test, a nonparametric equivalent of a one-way analysis of variance (ANOVA)

  • Pairwise comparisons based on Mann-Whitney tests showed higher value ratings in Year 5 (POGIL added) than in the two previous years, Year 3 and Year 4 (see the sketch after this list)
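
The paper reports these statistics without code; the sketch below is a minimal illustration, in Python with SciPy, of how a Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney comparisons could be run. The rating lists are hypothetical placeholders, not the study's data.

    # Minimal sketch (not the authors' analysis): Kruskal-Wallis omnibus test
    # followed by pairwise Mann-Whitney U tests on hypothetical 1-5 value ratings.
    from scipy import stats

    # Illustrative ratings only; one list per course year.
    year3 = [3, 4, 3, 2, 4, 3, 3, 4]
    year4 = [4, 3, 4, 4, 3, 4, 3, 4]
    year5 = [5, 4, 5, 4, 5, 4, 5, 5]   # year in which POGIL was added

    # Omnibus test: do value ratings differ across the three years?
    h_stat, p_overall = stats.kruskal(year3, year4, year5)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_overall:.4f}")

    # Follow-up pairwise comparisons with Mann-Whitney U tests.
    pairs = {
        "Year 3 vs Year 5": (year3, year5),
        "Year 4 vs Year 5": (year4, year5),
        "Year 3 vs Year 4": (year3, year4),
    }
    for label, (a, b) in pairs.items():
        u_stat, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"{label}: U = {u_stat:.1f}, p = {p_pair:.4f}")

If run on real data, the pairwise p-values would normally be adjusted for multiple comparisons (for example, with a Bonferroni correction); the values above are for illustration only.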

Summary

Introduction

Most students in the senior-level course planned to become construction site managers, a career path in which the theme for success was “If you don’t know the dirt, you’ll lose your shirt.” Yet they showed little interest in learning how to use soils as a construction material. The instructor and instructional designer began reviewing several areas of research that seemed relevant to student engagement and motivation. These areas included the rationale for completing an instructional task analysis (Feldon & Stowe, 2009; Smith & Ragan, 2005); use of knowledge surveys in formative assessment (Nuhfer & Knipp, 2003; Wirth & Perkins, 2005); how formative evaluation can help improve instruction (Shepard, 2005; Dunn & Mulvenon, 2009; Herman, 2013); and Keller’s ARCS motivation model with its elements of attention, relevance, confidence, and satisfaction (Keller, 2000, 2010). The instructor took the same questions from the survey, added space to write answers under each question, added “Pretest” as the title, and provided directions for how to complete the pretest.

The full paper continues with sections on Classroom Procedures, Results, Discussion, and A Catalyst for Scholarship.
