Efficient visual word recognition presumably relies on orthographic prediction error (oPE) representations. On the basis of a transparent neurocognitive computational model rooted in the principles of the predictive coding framework, we postulated that readers optimize their percept by removing redundant visual signals, allowing them to focus on the informative aspects of the sensory input (i.e., the oPE). Here, we explore alternative oPE implementations, testing whether increased precision, achieved by assuming all-or-nothing signaling and more realistic word lexicons, results in adequate representations underlying efficient word recognition. We used behavioral and electrophysiological data (i.e., EEG) for model evaluation. More precise oPE representations (i.e., implementing binary signaling and a frequency-sorted lexicon with the 500 most common five-letter words) best explained variance in behavioral responses and in electrophysiological data from 300 msec after stimulus onset. The original, less precise oPE representation still explained early brain activation best. This pattern suggests a dynamic adaptation of the represented visual-orthographic information, in which initial graded prediction errors convert into binary representations that allow accurate retrieval of word meaning. These results offer a neurocognitively plausible account of efficient word recognition, emphasizing prediction error representations of visual-orthographic information as central to the transition from perceptual processing to the access of word meaning.
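The core computation described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy five-word lexicon, the one-hot letter encoding, the lexicon-average prediction, and the 0.5 threshold for the binary variant all stand in for the model's actual visual-orthographic representations and its frequency-sorted 500-word lexicon.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(word):
    """One-hot encode a five-letter word as a 5 x 26 binary matrix
    (a toy stand-in for the model's visual-orthographic input)."""
    m = np.zeros((5, len(ALPHABET)))
    for pos, ch in enumerate(word):
        m[pos, ALPHABET.index(ch)] = 1.0
    return m

# Hypothetical mini-lexicon standing in for the 500 most frequent five-letter words.
lexicon = ["house", "world", "place", "horse", "mouse"]

# The prediction is the lexicon average: the visual signal a reader expects.
prediction = np.mean([encode(w) for w in lexicon], axis=0)

def graded_ope(word):
    """Graded prediction error: input minus prediction,
    i.e., the sensory signal with the redundant (predicted) part removed."""
    return encode(word) - prediction

def binary_ope(word, threshold=0.5):
    """All-or-nothing variant: signal only where the absolute error
    exceeds a threshold (an assumed value of 0.5 here)."""
    return (np.abs(graded_ope(word)) > threshold).astype(float)
```

Under this sketch, a word whose letters are well predicted by the lexicon yields a small residual error, while unpredictable letters survive thresholding in the binary variant, mirroring the graded-to-binary conversion the results suggest.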