This article evaluates the predictions of an algorithmic-level distributed associative memory model as it introduces, propagates, and resolves ambiguity, and compares them to the predictions of computational-level parallel parsing models, in which ambiguous analyses are tracked separately in discrete distributions. By superposing activation patterns that serve as cues to other activation patterns, the model maintains multiple syntactically complex analyses superposed in a finite working memory, propagates this ambiguity through multiple intervening words, and then resolves it in a way that yields a measurable predictor proportional to the log conditional probability of the disambiguating word given its context, marginalizing over all remaining analyses. In cases of complex structural ambiguity, these predictions are indeed consistent with those of computational-level parallel parsing models that produce this same probability as a predictor, which has been shown to reliably predict human reading times.
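The predictor described above is a surprisal computed by marginalizing over the analyses still active at the disambiguating word. A minimal sketch of that marginalization, with invented probabilities for a classic two-way structural ambiguity (the analysis labels and all numeric values are hypothetical, not taken from the model):

```python
import math

# Hypothetical posterior over two competing syntactic analyses of an
# ambiguous sentence prefix: P(analysis | prefix). Values are invented.
p_analysis = {
    "main-verb": 0.6,
    "reduced-relative": 0.4,
}

# Hypothetical probability of the disambiguating word under each
# analysis: P(word | analysis, prefix). Values are invented.
p_word_given_analysis = {
    "main-verb": 0.05,
    "reduced-relative": 0.30,
}

# Marginalize over all remaining analyses:
#   P(word | prefix) = sum_a P(a | prefix) * P(word | a, prefix)
p_word = sum(p_analysis[a] * p_word_given_analysis[a] for a in p_analysis)

# Surprisal, the reading-time predictor: -log P(word | prefix)
surprisal = -math.log(p_word)
print(round(p_word, 3), round(surprisal, 3))  # → 0.15 1.897
```

A word that is likely under only the lower-probability analysis thus receives high surprisal, which is the point of comparison between the associative memory model and discrete parallel parsers.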