Abstract

Many researchers in the field of implicit statistical learning agree that there is no single general implicit learning mechanism; rather, implicit learning takes place in highly specialized, encapsulated modules. However, the exact representational content of these modules is still under debate. While there is ample evidence for a distinction between modalities (e.g., visual and auditory perception), the representational content of the modules might even be distinguished by features within the same modality (e.g., location, color, and shape within the visual modality). In implicit sequence learning, there is evidence for the latter hypothesis, as a stimulus-color sequence can be learned concurrently with a stimulus-location sequence. Our aim was to test whether this also holds true for non-spatial features within the visual modality. This has been shown in artificial grammar learning, but not yet in implicit sequence learning. Hence, in Experiment 1, we replicated an artificial grammar learning experiment by Conway and Christiansen (2006) in which participants were expected to learn color and shape grammars concurrently. In Experiment 2, we investigated concurrent learning of sequences with an implicit sequence learning paradigm: the serial reaction time task. Here, we found evidence for concurrent learning of two sequences, a color and a shape sequence. Overall, the findings converge on the assumption that implicit learning might be based on features.
