Abstract
Currently, there are two qualitatively different model classes in the field of spoken language understanding. Autonomous models allow only bottom-up information flow, whereas interactive models allow higher-level representations (e.g., lexical) to affect processing at lower levels (e.g., phonemic). Part 1 of the present study included a test of a prediction that differentiates the two model classes: Is phoneme monitoring faster for targets in real words than in pseudowords, even before the word could in principle be recognized? The results indicate that this lexical advantage does occur, in accord with the predictions of interactive models. In Part 2, speech compression and expansion were used to assess the sufficiency or necessity of bottom-up evidence and of processing time in accomplishing lexical access. The results of Parts 1 and 2 suggested that in addition to the lexical effects posited by current models, sublexical activation may also play an important role. Data are presented in Part 3 that support this interpretation. Collectively, the results of the current study support interactive models of lexical processing, but require additional sublexical processes as well.