Abstract

Language learning often involves predicting categorical outcomes based on a set of cues. Error in predicting a categorical outcome is the difference between zero or one and the outcome’s current level of activation. The current activation level of a categorical outcome is argued to be a non-linear, logistic function of the activation the outcome receives from the cues. Crucially, the logistic activation function asymptotically approaches zero and one without ever reaching or overshooting them. This allows error-driven learning to avoid settling on spurious associations between cues and outcomes that never co-occur (“spurious excitement”). In an artificial language experiment, human learners are likewise not observed to show spurious excitement. The logistic activation function is compared to alternative solutions to spurious excitement and shown to have important advantages: it enables one-shot learning and steep, S-shaped learning curves, and explains why cue competition in language learning can be overcome with additional training.
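The learning rule described above can be sketched as follows. This is a minimal toy illustration, not the paper’s implementation: cue representations, learning rate, and trial structure are all assumptions. One cue always co-occurs with the outcome, another never does; because the logistic activation stays strictly between zero and one, the error term never changes sign spuriously, and the never-co-occurring cue acquires no excitatory association.

```python
import math

def sigmoid(x):
    """Logistic activation: asymptotes at 0 and 1, never overshooting."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy data: cue A ([1, 0]) always co-occurs with the outcome,
# cue B ([0, 1]) never does.
trials = [([1, 0], 1), ([0, 1], 0)] * 200

w = [0.0, 0.0]   # cue-to-outcome association weights
lr = 0.5         # assumed learning rate

for cues, outcome in trials:
    net = sum(wi * ci for wi, ci in zip(w, cues))
    activation = sigmoid(net)          # bounded in (0, 1)
    error = outcome - activation       # difference from 0 or 1
    for i, cue in enumerate(cues):
        w[i] += lr * error * cue       # error-driven weight update

print(sigmoid(w[0]))  # activation from cue A alone: close to 1
print(sigmoid(w[1]))  # activation from cue B alone: close to 0 (no spurious excitement)
```

Because the error shrinks as activation nears its asymptote, updates become progressively smaller, giving the S-shaped learning curve the abstract mentions.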
