Abstract

According to noun-cue models, arbitrary linguistic categories, such as those associated with case and gender systems, are difficult to learn unless members of the target category (i.e., nouns) are marked with phonological or semantic cues that reliably co-occur with the grammatical morphemes (e.g., determiners) that exemplify the categories. Syntactic context models do not require noun cues. Five experiments used an artificial grammar with gender-like noun subcategories and locative postpositions showing gender agreement to test the hypothesis that syntactic context models are sufficient for category induction if they include processes for drawing learners’ attention to the correlated subsets of grammatical morphemes that define the gender categories. Experiment 1 validated a computer-based experimental paradigm for artificial language learning. Experiment 2 showed that direct instruction was one way to draw learners’ attention to the defining morphemes and bring about category induction. Experiments 3–5 showed that blocking the learning trials by noun likewise drew learners’ attention to the correlated subsets of grammatical morphemes and led to category induction. Experiments 2–5 thus provided support for syntactic context models under specific learning conditions. The implications for first- and second-language learning are discussed.
