Abstract

Two points are made in relation to the recent article by K. Kim and M. Glanzer (1993). First, the attention-likelihood model is more complex than these authors and others suggest. In particular, 2 kinds of quantities, (a) parameters representing the true state of the subject's memory and (b) the subject's estimates of those parameters, have been referred to using the same symbols. This obscures the essential role of metamemory in the model's predictions. Second, log-likelihood rescaling is not needed to explain the mirror effect. An alternative rescaling scheme is described, which can be added to a variety of memory models. This new rescaling method estimates a test item's learnability by learning it. Simulations show that the method is consistent with Kim and Glanzer's experimental results.
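
The abstract describes the new rescaling scheme only at a high level. As a rough, hypothetical sketch of how "estimating a test item's learnability by learning it" could rescale familiarity in a generic strength model (the strength model, the two item classes, and every numeric value below are assumptions chosen for illustration, not taken from the article), a simulation might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only -- not the article's actual model or parameters.
# Two item classes that differ in learnability (e.g., low- vs high-frequency words).
n = 2000
easy = rng.random(n) < 0.5                      # class membership (assumed split)
learn_rate = np.where(easy, 0.8, 0.3)           # per-class learning rate (assumed values)
old = rng.random(n) < 0.5                       # half studied targets, half new lures

# Assumed strength model: studied items gain strength in proportion to their learnability.
strength = np.where(old, learn_rate, 0.0) + rng.normal(0.0, 0.15, n)

# "Estimate learnability by learning it": give each test item a brief study trial,
# treat the observed gain as a noisy estimate of its learning rate, and rescale
# raw familiarity by that estimate so easy and hard items share a common scale.
estimated_rate = learn_rate + rng.normal(0.0, 0.05, n)
rescaled = strength / np.clip(estimated_rate, 0.05, None)

criterion = 0.5
for label, cls in (("easy", easy), ("hard", ~easy)):
    hits = np.mean(rescaled[cls & old] > criterion)
    false_alarms = np.mean(rescaled[cls & ~old] > criterion)
    print(f"{label}: hit rate = {hits:.2f}, false-alarm rate = {false_alarms:.2f}")
```

Under these assumed numbers the easy class shows both a higher hit rate and a lower false-alarm rate than the hard class, i.e., a mirror pattern; without the rescaling step, the two classes would share the same false-alarm rate in this toy strength model.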

