Immediate Integration of Syntactic and Referential Constraints on Spoken Word Recognition

James S. Magnuson (magnuson@psych.columbia.edu)
Department of Psychology, Columbia University
1190 Amsterdam Ave., MC 5501, New York, NY 10027 USA

Michael K. Tanenhaus (mtan@bcs.rochester.edu) and Richard N. Aslin (aslin@cvs.rochester.edu)
Department of Brain & Cognitive Sciences, University of Rochester
Rochester, NY 14627 USA

Abstract

We tested the hypothesis that syntactic constraints on spoken word recognition are integrated immediately when they are highly predictive. We used an artificial lexicon paradigm to create a lexicon of nouns (referring to shapes) and adjectives (referring to textures). Each word had phonological competitors in both form classes. We created strong form-class expectations by using visual displays that either required adjective use or made adjectives infelicitous. We found evidence for immediate integration of form-class expectations based on these pragmatic visual cues: similar-sounding words competed when they were from the same form class, but not when they were from different form classes.

Top-down constraints on word recognition

It is clear that we integrate top-down information when we interpret language. If someone tells us they put money in a bank, we understand that their money is in a vault and not buried next to a river. What is less clear is when and how we integrate top-down knowledge with bottom-up linguistic input. One possibility is that language is processed in stages, with top-down information integrated only after an encapsulated first pass over the bottom-up input (e.g., Frazier & Clifton, 1996; Norris, McQueen & Cutler, 2000). The rationale behind this class of model is that optimal efficiency can be achieved by applying automatic processes that will almost always yield a correct result; in the rare event that the automatic result cannot be reconciled with top-down information, reanalysis is required. A second possibility is that top-down constraints are integrated immediately, with weights proportional to their predictive power (e.g., McClelland & Elman, 1986; MacDonald, Pearlmutter & Seidenberg, 1994; Tanenhaus & Trueswell, 1994). The rationale behind constraint-based approaches is that a system can be made more efficient by allowing any sufficiently predictive information source to be integrated with processing as soon as it becomes relevant.

While a variety of results support constraint-based theories of sentence processing (see MacDonald et al., 1994), there is reason to believe that spoken word recognition is initially encapsulated from top-down constraints. Swinney (1979) and Tanenhaus, Leiman & Seidenberg (1979) provided the seminal results on this issue by examining whether all readings of a homophone are activated independent of context. Tanenhaus et al. presented participants with spoken sentences that ended with a syntactically ambiguous word (e.g., “they all rose” vs. “they bought a rose”). When participants were asked to name a visual target immediately at the offset of the ambiguous word, priming was found both for associates of the alternative suggested by the context (e.g., “stood” given “they all rose”) and for associates of the homophone that did not fit the syntactic frame (e.g., “flower”). Given a 200-ms delay before the presentation of the visual stimulus, priming was found only for associates of the syntactically appropriate word.
This suggests that lexical activation is initially based only on bottom-up information, and that top-down information is a relatively late-acting constraint. Tanenhaus & Lucas (1987) argued that this made sense given the limited predictive power of a form-class expectation. Knowing that the next word will be one of tens of thousands of nouns would afford virtually no advantage for most nouns (those without homophones in different form classes). Furthermore, expectations for classes like noun or verb might be very weak because modifiers can almost always be inserted before either class (e.g., “they just rose”, “they bought a very pretty red rose”; cf. Shillcock & Bard, 1993).

Shillcock & Bard (1993) pointed out that there are form classes that should be more predictive than noun or verb because they have few members: those made up of closed-class words. They examined whether /wʊd/ in a sentence context favoring the closed-class item, “would” (e.g., “John said that he didn’t want to do the job, but his brother would, as we later found out”), would prime associates of its homophone, “wood”, such as “timber” (compared with a context like “John said he didn’t want to do the job with his brother’s
