Abstract

As scientific models of student thinking, learning progressions (LPs) have been evaluated in terms of one important, but limited, criterion: fit to empirical data. We argue that LPs are not empirically adequate, largely because they rely on problematic assumptions of theory-like coherence in students’ thinking. Through an empirical investigation of physics teachers’ interactions with an LP-based score report, we investigate two other criteria of good models: utility and generativity. When interacting with LP-based materials, teachers often adopted finer-grained perspectives (in contrast to the levels-based perspective of the LP itself) and used these finer-grained perspectives to formulate more specific, actionable instructional ideas than when they reasoned in terms of LP levels. However, although teachers did not use the LP-based materials in ways envisioned by LP researchers, the teachers’ interactions with the score reports embodied how philosophers envision the fruitful use of good models of dynamic, complex systems. In particular, teachers took a skeptical, inquiring stance toward the LP, using it as an oversimplified starting place for generating and testing hypotheses about student thinking, and using concepts from the model in ways that moved beyond the knowledge available in the LP. Thus, despite—and perhaps even because of—their empirical inadequacy, LPs have the potential to serve teachers as productive models in ways not envisioned by LP researchers: as tools for knowledge generation.
