Abstract

In a prior review, Perruchet and Pacton (2006) noted that the literature on implicit learning and the more recent studies on statistical learning focused on the same phenomena, namely domain-general learning mechanisms operating in incidental, unsupervised learning situations. However, they also noted that implicit learning and statistical learning research favored different interpretations, focusing respectively on the selection of chunks and on the computation of transitional probabilities aimed at discovering chunk boundaries. This paper examines the state of the debate 12 years later. The link between the contrasting theories and their historical roots has faded, but a number of studies have been aimed at contrasting the predictions of these two approaches. Overall, these studies strongly question the still prevalent account based on the statistical computation of pairwise associations. Various chunk-based models provide much better predictions in a number of experimental situations. However, these models rely on very different conceptual frameworks, as illustrated by a comparison between Bayesian models of word segmentation, PARSER, and a connectionist model (TRACX).
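To make the contrast concrete, the following is a minimal, hypothetical sketch (in Python, not taken from the paper nor from PARSER or TRACX) of the transitional-probability account referred to above: pairwise transitional probabilities are computed over adjacent syllables, and word boundaries are posited at local dips in TP. The function names and the toy syllable stream are illustrative assumptions only.

```python
# Illustrative sketch only: segmenting a syllable stream by computing
# transitional probabilities (TPs) and positing word boundaries at local
# TP dips, i.e., the "statistical" account contrasted with chunking models.
from collections import Counter

def transitional_probabilities(syllables):
    """TP(A -> B) = freq(AB) / freq(A), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: count / first_counts[pair[0]]
            for pair, count in pair_counts.items()}

def segment(syllables, tps):
    """Insert a boundary wherever the pairwise TP dips below both neighbours."""
    pair_tps = [tps[(a, b)] for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, syl in enumerate(syllables[1:], start=1):
        left = pair_tps[i - 1]                                # TP into this syllable
        prev_tp = pair_tps[i - 2] if i >= 2 else float("inf")
        next_tp = pair_tps[i] if i < len(pair_tps) else float("inf")
        if left < prev_tp and left < next_tp:                 # local TP minimum => boundary
            words.append("".join(current))
            current = []
        current.append(syl)
    words.append("".join(current))
    return words

# Toy familiarization stream built from four artificial "words"
# (tupiro, golabu, bidaku, padoti), concatenated without pauses.
stream = ("tu pi ro go la bu bi da ku pa do ti go la bu "
          "tu pi ro bi da ku go la bu pa do ti tu pi ro").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
```

In this toy stream, within-word TPs are 1.0 and between-word TPs are at most 0.5, so the dip heuristic recovers the embedded words; chunk-based models such as PARSER instead build and strengthen candidate units directly rather than computing boundary statistics.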
