Abstract
The representation used to recognize spoken words was investigated using natural CVCC and CCVC word and nonword stimuli in a primed lexical decision task. In this task, subjects decided whether the second item in a prime-target pair was a word or a nonword, as quickly and accurately as possible. Five prime-target relations were devised: identity (/pats/–/pats/), break cluster (/spat/–/pats/), change vowel (/p ts/–/pats/), change both (/st p/–/pats/), and control (/grin/–/pats/). The pattern of mean RTs and accuracy across these five relation types provides insight into issues such as the nature of the representation underlying spoken words (possibilities include the abstract phoneme, position-specific phoneme, triphone, and syllable) and the cohesiveness of consonantal clusters (will a cluster act as a cohesive unit or as separate phonemes?). Furthermore, enough data were collected to partition the RTs into fast, medium, and slow ranges. It was reasoned that the pattern of results in the slow range would reflect a postlexical representation, while the results from the fast range would reflect the prelexical level of representation used in recognizing spoken words. [Work supported by NIDCD Grant No. DC00219 to SUNY at Buffalo.]
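As a hypothetical illustration of the RT-partitioning analysis described above (not the authors' actual analysis code), the sketch below splits each subject's RTs into fast, medium, and slow tertiles and then computes mean correct-response RT and accuracy for each of the five prime-target relations within each range. The trial-record fields (`subject`, `relation`, `rt`, `correct`) are assumed names for the sake of the example.

```python
from collections import defaultdict
from statistics import mean

RELATIONS = ["identity", "break_cluster", "change_vowel", "change_both", "control"]


def tertile_label(rt, cutoffs):
    """Assign an RT to the fast, medium, or slow range given two cutoff values."""
    low, high = cutoffs
    if rt <= low:
        return "fast"
    if rt <= high:
        return "medium"
    return "slow"


def partition_and_summarize(trials):
    """trials: list of dicts with 'subject', 'relation', 'rt' (ms), 'correct' (bool).

    Returns mean correct-response RT and accuracy for each (range, relation)
    cell, with tertile cutoffs computed separately for each subject's RTs.
    """
    # Per-subject tertile cutoffs over all of that subject's RTs.
    by_subject = defaultdict(list)
    for t in trials:
        by_subject[t["subject"]].append(t["rt"])
    cutoffs = {}
    for subj, rts in by_subject.items():
        rts = sorted(rts)
        n = len(rts)
        cutoffs[subj] = (rts[n // 3], rts[(2 * n) // 3])

    # Collect RTs (correct trials only) and accuracy (all trials) per cell.
    cells = defaultdict(lambda: {"rts": [], "correct": []})
    for t in trials:
        rng = tertile_label(t["rt"], cutoffs[t["subject"]])
        cell = cells[(rng, t["relation"])]
        cell["correct"].append(t["correct"])
        if t["correct"]:
            cell["rts"].append(t["rt"])

    return {
        key: {
            "mean_rt": mean(cell["rts"]) if cell["rts"] else None,
            "accuracy": mean(cell["correct"]),
        }
        for key, cell in cells.items()
    }
```

With trial records in that format, `partition_and_summarize(trials)[("fast", "identity")]`, for example, would give the mean correct RT and accuracy for identity-primed targets in the fast range, so the five-relation pattern can be compared across the three RT ranges.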