When children learn to add, they count on their fingers, beginning with the simple SUM strategy and gradually developing the more sophisticated and efficient MIN strategy. The shift from SUM to MIN provides an ideal domain for the study of naturally occurring discovery processes in cognitive skill acquisition. The SUM-to-MIN transition poses a number of challenges for machine-learning systems that would model the phenomenon. First, in addition to the SUM and MIN strategies, Siegler and Jenkins (1989) found that children exhibit two transitional strategies, but not a strategy proposed by an earlier model. Second, they found that children do not invent the MIN strategy in response to impasses, or gaps in their knowledge. Rather, MIN develops spontaneously and gradually replaces earlier strategies. Third, intricate structural differences between the SUM and MIN strategies make it difficult, if not impossible, for standard, symbol-level machine-learning algorithms to model the transition. We present a computer model, called GIPS, that meets these challenges. GIPS combines a relatively simple algorithm for problem solving with a probabilistic learning algorithm that performs symbol-level and knowledge-level learning, both in the presence and absence of impasses. In addition, GIPS makes psychologically plausible demands on local processing and memory. Most importantly, the system successfully models the shift from SUM to MIN, as well as the two transitional strategies found by Siegler and Jenkins.
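For readers unfamiliar with the two strategies, the following sketch contrasts them as counting procedures. It is purely illustrative: the function names and the Python rendering are assumptions of this summary, not taken from the paper, and it does not reflect how GIPS represents or learns the strategies.

```python
# Illustrative sketch of the two counting strategies (not from the paper or from GIPS).
# SUM: count out each addend on the fingers, then count the whole collection.
# MIN: start from the larger addend and count up by the smaller one.

def sum_strategy(a: int, b: int) -> int:
    """Count both addends from 1, then recount the combined set."""
    fingers = []
    for _ in range(a):          # raise one finger per unit of the first addend
        fingers.append(1)
    for _ in range(b):          # raise one finger per unit of the second addend
        fingers.append(1)
    total = 0
    for _ in fingers:           # recount the whole collection from 1
        total += 1
    return total

def min_strategy(a: int, b: int) -> int:
    """Start at the larger addend and count on by the smaller one."""
    larger, smaller = (a, b) if a >= b else (b, a)
    total = larger
    for _ in range(smaller):    # count up once per unit of the smaller addend
        total += 1
    return total

if __name__ == "__main__":
    # Both strategies yield the same answer; MIN takes fewer counting steps.
    print(sum_strategy(2, 5), min_strategy(2, 5))  # 7 7
```

The efficiency gap between the two procedures, together with their structural differences, is what makes the SUM-to-MIN transition a demanding target for learning models.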