Abstract

The ability of grades earned in the first few weeks of a course to predict overall performance can be quite valuable for identifying at-risk students, informing interventions for those students, and offering educators feedback on the impact of instruction on learning. Yet research on the validity of such predictions when they are made by machine learning algorithms is scarce. The present research examined two interrelated questions: To what extent can educators rely on early performance to predict students’ poor course grades at the end of the semester? Are predictions sensitive to the mode of instruction adopted (online versus face-to-face) and to the course taught? We selected a sample of courses representative of the general education curriculum to ensure the inclusion of students from a variety of academic majors. Grades on the first test and the first assignment (early formative assessment measures) were used to identify students whose course performance at the end of the semester would be considered poor. Overall, the predictive validity of the early assessment measures was meager, particularly for online courses. However, exceptions were uncovered, each reflecting a particular combination of instructional mode and course. These findings suggest that changes to some currently used formative assessment measures are warranted to enhance their sensitivity to course demands and thus their usefulness to both students and instructors as feedback tools. The feasibility of a grade-prediction application in general education courses, which depends critically on the accuracy of such tools, is discussed, along with its challenges and potential benefits.
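
The abstract does not specify the modeling approach used. As a hedged illustration only, the sketch below shows one plausible way such an early-warning prediction could be set up, assuming a logistic-regression classifier, an assumed grade cutoff of 70 for "poor" performance, and hypothetical column names (test1, assignment1, final_grade) that do not come from the study.

```python
# Illustrative sketch of an early-warning grade prediction, NOT the
# authors' method: the classifier, the 70-point "poor grade" cutoff,
# and all column names below are assumptions for demonstration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical roster: early formative scores plus the final course grade.
df = pd.DataFrame({
    "test1":       [88, 42, 75, 55, 93, 61, 38, 70],
    "assignment1": [90, 50, 80, 48, 95, 65, 40, 72],
    "final_grade": [85, 48, 78, 52, 91, 60, 45, 74],
})

# Early assessment measures are the only predictors; the target labels
# a student "at risk" when the final grade falls below the assumed cutoff.
X = df[["test1", "assignment1"]]
y = (df["final_grade"] < 70).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)

# Predictive validity would be judged from held-out precision/recall,
# which is the quantity the study reports as "meager" overall.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In practice, the precision and recall on held-out students are what determine whether such a tool is accurate enough to trigger interventions; the study's findings suggest these figures vary by instructional mode and course.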
