Abstract

Faces and words are traditionally assumed to be independently processed. Dyslexia is also traditionally thought to be a non-visual deficit. Counter to both ideas, face perception deficits in dyslexia have been reported, while other studies report no such deficits. We sought to resolve this discrepancy. Sixty adults participated in the study (24 dyslexic, 36 typical readers). Feature-based processing and configural or global form processing of faces were measured with a face matching task. Opposite laterality effects in these tasks, dependent on the left–right orientation of faces, indicated that they tapped separable visual mechanisms. Dyslexic readers tended to be poorer than typical readers at feature-based face matching, while no group differences were found for global form face matching. We conclude that word and face perception are associated when the latter requires the processing of visual features of a face, whereas processing the global form of faces apparently shares minimal, if any, resources with visual word processing. The current results indicate that visual word and face processing are both associated and dissociated, depending on which visual mechanisms are task-relevant. We suggest that reading deficits could stem from multiple factors, and that one such factor is a problem with feature-based processing of visual objects.

Highlights

  • Faces and words are traditionally assumed to be independently processed

  • Twenty-three people screened positive for dyslexia (ARHQ score of 0.43 or higher), thereof 10 out of 11 people who reported a previous diagnosis of dyslexia

  • The current study indicates that dyslexic readers tend to be worse at feature- or part-based processing of faces compared to typical readers, while no group differences were found in global or configural processing of faces



Introduction

Faces and words are traditionally assumed to be independently processed. Dyslexia is traditionally thought to be a non-visual deficit. When compared pixel-by-pixel, such images might have less in common than images of two completely different objects, such as two people seen from the same viewpoint, or two words such as CAT vs. OAT (compare to cat and oat). These challenges are often collectively grouped under the term high-level vision and are generally thought to be solved by later stages of the ventral visual stream[1]. Behrmann and Plaut[7,29] offer an alternative to the traditional view that higher levels of the ventral visual stream consist of independent domain-specific regions dedicated to the processing of particular categories. They acknowledge that faces and words have the strongest claim of all object classes to domain-specificity, with the potential for distinct cortical regions specialized for their high-level visuoperceptual analysis. Robotham and Starrfelt[21] provide convincing evidence that face and word recognition abilities can be selectively affected.

