Abstract

The general foundations for a theory of human sentence processing and the basic goals and questions of the 1960s are argued to be solid and still highly relevant today, even if the particular processing model of the time (the clausal model) does not hold up. Today there is considerable agreement that sentence processing is both fast and grammatically controlled from the outset, and that minimal structure and recent attachments are favored across a diverse range of languages. Parsing is not purely bottom-up but instead allows phrases to be postulated before all daughters have been parsed; it works similarly for head-initial and head-final languages. The parser takes as input a rich prosodic/intonational representation, which influences processing in ways that extend far beyond the use of intonational boundaries as local (juncture) cues. It is argued that psycholinguistic evidence disconfirms the use of prepackaged, fully articulated X′ templates for identifying phrase structure, instead supporting the extended projections of Grimshaw (1991). Further, considerable evidence shows that the processor respects grammatical distinctions among types of dependencies, as indicated by differential processing effects for antecedent-government chains versus binding relations, by the processing difficulty of composite dependencies involving more than one grammatical type of dependency, and by distinctions between deep and surface anaphora. The central problem for future theories of sentence processing is claimed to be the development of theories of sentence interpretation. Various possible approaches are discussed, including parallel computation of alternatives with subsequent selection, a task-driven interpretive system, and underspecification. The question is raised whether sentence-level psycholinguistics is in the middle of a paradigm shift from models with symbolic representations and an intrinsically serial (von Neumann) architecture to constraint-satisfaction/connectionist models with inherently parallel architectures. Reasons are offered for rejecting current constraint-satisfaction models, including a lack of supporting evidence discriminating between constraint-satisfaction and “conventional” models; serious problems with the use of prepackaged X′ templates for identifying phrase structure (the only proposal to date for handling syntactic structures in constraint-based models); and an inability to deal with cross-language generalizations. Pure connectionist models are unlikely to fare better in the future, for two reasons. First, they are apparently unable to handle restricted universal quantification (Marcus, 1998), which clearly lies within the capacity of the human sentence processor. Second, competition between alternatives lies at the very heart of such models. Maximal competition should occur in processing fully ambiguous sentences, which are thus predicted to take longer to process than biased or temporarily ambiguous ones (Frazier & Clifton, 1997). The empirical evidence, however, suggests that fully ambiguous sentences are not systematically harder to process than their disambiguated counterparts.

