Abstract

Multiple cues influence listeners’ segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may therefore differ from doing so under typical laboratory conditions. In particular, a speaker’s articulatory effort – hyperarticulation vs. hypoarticulation (H&H) – may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners’ interpretation of segmentation cues is affected by speech style (spontaneous conversation vs. read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylized landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues – semantic likelihood and cross-boundary diphone phonotactics – was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favorable/unfavorable) and cue type (phonotactics/semantics): there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic cues to segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not affect listener behavior. We conclude that previous research using read speech reliably indicates the primacy of lexically based cues in the segmentation of natural conversational speech.

Highlights

  • Most studies of speech perception in general, and speech segmentation in particular, have used stimuli elicited in careful readings rather than natural communicative conditions

  • We excluded participants whose mean latencies in the experimental trials were more than two standard deviations above the overall participant mean within their own condition (this criterion is sketched in code after this list)

  • There was no effect of Valence in the phonotactics condition, χ²(1) = 0.72, p > 0.10, but there was a main effect of Style, χ²(1) = 4.18, p < 0.05 (Figure 3); see the second sketch below for the implied p-values
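The exclusion rule in the second highlight amounts to a per-condition outlier cut on participant-level mean latencies. The Python sketch below is a minimal illustration of that rule only, not the authors’ analysis code; the flat trial records and the field names ('participant', 'condition', 'rt') are assumptions made for the example.

```python
from collections import defaultdict
from statistics import mean, stdev

def excluded_participants(trials):
    """Participants whose mean latency is more than 2 SD above their condition's mean.

    `trials` is an iterable of dicts with hypothetical keys 'participant',
    'condition', and 'rt' (latency in ms).
    """
    # Mean latency per participant over their experimental trials
    rts_by_participant = defaultdict(list)
    condition_of = {}
    for t in trials:
        rts_by_participant[t["participant"]].append(t["rt"])
        condition_of[t["participant"]] = t["condition"]
    participant_means = {p: mean(v) for p, v in rts_by_participant.items()}

    # Distribution of participant means within each condition
    means_by_condition = defaultdict(list)
    for p, m in participant_means.items():
        means_by_condition[condition_of[p]].append(m)

    # Exclude anyone more than two standard deviations above their own condition's mean
    excluded = set()
    for p, m in participant_means.items():
        cond_means = means_by_condition[condition_of[p]]
        if m > mean(cond_means) + 2 * stdev(cond_means):
            excluded.add(p)
    return excluded
```

The χ²(1) statistics in the third highlight can likewise be sanity-checked against the chi-square distribution with one degree of freedom, whatever model comparison produced them; the lines below simply recover the implied p-values and confirm the reported thresholds.

```python
from scipy.stats import chi2

# p-values implied by the reported chi-square statistics with df = 1
p_valence = chi2.sf(0.72, 1)  # ~0.40, consistent with the reported p > 0.10 (no Valence effect)
p_style = chi2.sf(4.18, 1)    # ~0.04, consistent with the reported p < 0.05 (Style effect)
print(f"Valence: p = {p_valence:.3f}; Style: p = {p_style:.3f}")
```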

Introduction

Most studies of speech perception in general, and speech segmentation in particular, have used stimuli elicited in careful readings rather than natural communicative conditions. Ensuring the ecological validity of mechanisms established with read stimuli requires corroborative data (for an early example, see Mehta and Cutler, 1988). Words in conversational speech tend to be less intelligible than citation forms (e.g., Pickett and Pollack, 1963), with a narrower formant frequency space for vowels, higher rates of vowel reduction and elision, and greater coarticulation and allophonic variation (e.g., Klatt and Stevens, 1973; Brown, 1977; Duez, 1995). Conversational speech also tends to be highly contextualized, with the production and interpretation of utterances potentially dependent on a mutual understanding of the foregoing interaction. Within the phonetic domain, the speaker’s degree of articulatory effort – hyperarticulation vs. hypoarticulation (H&H) – may vary according to communicative demands.
