Abstract

How is orthographic knowledge acquired? In line with the self-teaching hypothesis, most computational models assume that phonological recoding plays a pivotal role in orthographic learning. However, these models make simplifying assumptions about the mechanisms involved in visuo-orthographic processing. Contrary to evidence from eye movement data during orthographic learning, they assume that orthographic information about novel words is immediately available and accurately encoded after a single exposure. In this paper, we describe BRAID-Learn, a new computational model of orthographic learning. BRAID-Learn is a probabilistic and hierarchical model that incorporates the mechanisms of visual acuity, lateral interference, and visual attention involved in word recognition. Orthographic learning in the model rests on three main mechanisms: first, visual attention moves over the input string to optimize the gain of information about letter identity at each fixation; second, top-down lexical influence is modulated as a function of stimulus familiarity; third, after exploration, the perceived information is used to create a new orthographic representation or to stabilize a better-specified representation of the input word. BRAID-Learn was challenged on its capacity to simulate the eye movement patterns reported in humans during incidental orthographic learning. In line with the behavioral data, the model predicts a larger decline across exposures in the number of fixations and in processing time for novel words than for known words. For novel words, most changes occur between the first and second exposure, that is, after the creation in memory of a new orthographic representation. Beyond phonological recoding, our results suggest that visuo-attentional exploration is an intrinsic component of orthographic learning that is seldom taken into consideration by models or theoretical accounts.

