Abstract

A recurrent neural network model of phonological pattern learning is proposed. The model is a relatively simple neural network with one recurrent layer, and it displays learning biases that mimic observed biases in human learning. Single-feature patterns are learned faster than two-feature patterns, and vowel-only or consonant-only patterns are learned faster than patterns involving both vowels and consonants, mimicking the results of laboratory learning experiments. In non-recurrent models, capturing these biases requires alpha features or some other representation of repeated features, but with a recurrent neural network, these elaborations are not necessary.

Highlights

  • Models of phonological pattern learning typically require large numbers of constraints or rules on where features can occur, plus alpha features or some other representation of repeated features, to allow certain patterns to be learned more quickly (Hayes and Wilson, 2008; Moreton et al., 2015)

  • This paper describes a simple recurrent neural network model of phonological pattern learning that is biased towards learning single-feature patterns and patterns over only consonants or vowels without using alpha features, separate representations of consonants and vowels, or conjunctive constraints

  • Non-recurrent neural network models such as the single-layer perceptron require a representation of repeated features to allow single-feature patterns to be learned more quickly (Moreton, 2012); the addition of a recurrent layer appears to have the same effect


Introduction

Models of phonological pattern learning typically require large numbers of constraints or rules on where features can occur, plus alpha features or some other representation of repeated features, to allow certain patterns to be learned more quickly (Hayes and Wilson, 2008; Moreton et al., 2015). Moreton, Pater, and Pertsova (2015) describe a cue-based learning model that uses these conjunctive constraints. Their model is a maximum entropy model trained by gradient descent on negative log-likelihood, and is related to the single-layer perceptron. It successfully models the biases found in human phonological learning experiments, but still requires listing all possible constraint conjunctions in the input. This paper describes a simple recurrent neural network model of phonological pattern learning that is biased towards learning single-feature patterns and patterns over only consonants or vowels, without using alpha features, separate representations of consonants and vowels, or conjunctive constraints. More complex patterns, or patterns requiring more features, will likely require a larger number of neurons in the recurrent layer.
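The kind of architecture described here can be sketched as an Elman-style recurrent network that reads a word segment by segment and outputs a pattern-membership probability. The sketch below is illustrative only, not the paper's implementation: the feature dimensionality, hidden-layer size, and weight initialization are all assumptions, and the network is untrained. The point is that the hidden state carries information about earlier segments forward, so repeated features can in principle be detected without explicit alpha features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for the sketch: each segment is a vector of 4
# binary phonological features; the recurrent layer has 8 hidden units.
N_FEATURES, N_HIDDEN = 4, 8

# Randomly initialized weights (an untrained network, for illustration).
W_in = rng.normal(0.0, 0.1, (N_HIDDEN, N_FEATURES))  # input -> hidden
W_rec = rng.normal(0.0, 0.1, (N_HIDDEN, N_HIDDEN))   # hidden -> hidden
b_h = np.zeros(N_HIDDEN)
w_out = rng.normal(0.0, 0.1, N_HIDDEN)               # hidden -> output
b_out = 0.0

def forward(segments):
    """Run the recurrent layer over a word (a sequence of segment
    feature vectors) and return the probability that the word
    conforms to the pattern."""
    h = np.zeros(N_HIDDEN)
    for x in segments:
        # The hidden state is a function of the current segment AND
        # the previous hidden state, so earlier segments' features
        # remain available when later segments are processed.
        h = np.tanh(W_in @ x + W_rec @ h + b_h)
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid output unit

# A toy three-segment "word": each row is one segment's feature vector.
word = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 0],
                 [1, 0, 1, 1]], dtype=float)
p = forward(word)
print(p)  # some probability in (0, 1); the network is untrained
```

Training such a network on labeled conforming/non-conforming words by gradient descent on negative log-likelihood would parallel the MaxEnt setup of Moreton, Pater, and Pertsova (2015), except that no constraint conjunctions need to be listed in advance.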
