Abstract

Most current models of word naming are restricted to processing monosyllabic words and pseudowords. This limitation stems from difficulties in representing the orthographic and phonological codes for words varying substantially in length. Sibley, Kello, Plaut, and Elman (2008) described an extension of the simple recurrent network architecture, called the sequence encoder, that learned orthographic and phonological representations of variable-length words. The present research explored the use of sequence encoders in models of monosyllabic and bisyllabic word naming. The models' performance is comparable to that of other models in terms of word and pseudoword naming accuracy and in accounting for naming latency phenomena. Although the models do not address all naming phenomena, the results suggest that sequence encoders can learn orthographic and phonological representations, making it easier to create models that scale up to larger vocabularies while accounting for behavioural data.
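As a concrete illustration of the architecture the abstract describes, below is a minimal sketch of a sequence encoder, assuming the general design reported by Sibley, Kello, Plaut, and Elman (2008): an encoder simple recurrent network (SRN) reads a word one symbol at a time into a fixed-width hidden state, and a decoder SRN is trained to reproduce the same symbol sequence from that state, so that variable-length words map to fixed-width orthographic or phonological codes. All class names, layer sizes, and training details here are illustrative assumptions, not the authors' implementation.

```python
# A hypothetical sequence-encoder sketch: an SRN autoencoder that maps a
# variable-length symbol sequence to a fixed-width code and back again.
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    def __init__(self, n_symbols: int, hidden_size: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, hidden_size)
        # Encoder SRN: its final hidden state is the fixed-width word code.
        self.encoder = nn.RNN(hidden_size, hidden_size, batch_first=True)
        # Decoder SRN: unrolls the code back into a symbol sequence.
        self.decoder = nn.RNN(hidden_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, n_symbols)

    def forward(self, symbols: torch.Tensor) -> torch.Tensor:
        # symbols: (batch, seq_len) integer-coded letters or phonemes.
        x = self.embed(symbols)
        _, code = self.encoder(x)                 # code: (1, batch, hidden)
        seq_len = symbols.size(1)
        # Present the fixed-width code as input at every decoding step.
        dec_in = code.transpose(0, 1).repeat(1, seq_len, 1)
        out, _ = self.decoder(dec_in)
        return self.readout(out)                  # (batch, seq_len, n_symbols)

# Usage: train as an autoencoder so variable-length words compress into
# fixed-width codes that can still be decoded into the original sequence.
model = SequenceEncoder(n_symbols=30)
words = torch.randint(0, 30, (4, 7))              # batch of 4 seven-symbol words
logits = model(words)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 30), words.reshape(-1))
loss.backward()
```

Training such a network as an autoencoder forces the fixed-width code to preserve the full symbol sequence, which is what would allow a naming model built on these codes to handle words of varying length.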
