Abstract

Listeners can use vocal features of speech to help segregate a target talker from a field of different-voiced speech maskers. However, recent research also suggests that acoustic features (such as those responsible for talker identity) are stored alongside speech's lexical content in episodic memory and can be beneficial in some non-overlapping speech perception tasks as well (e.g., Goldinger, 1996). This paired-voice benefit may have implications for speech displays and dialog systems, since purposeful selection of the speaker's voice is possible in those settings, unlike in most live speech communication tasks. In the current experiments, we investigated whether manipulating voice identity could improve performance in three complex listening situations relevant to speech displays: extraction of information from background speech, listening while simultaneously speaking, and keeping track of multiple agents' states. Results indicate that the benefits of individualized voices seen in the episodic memory literature do not translate to the current, more complex, speech tasks.
