Abstract

Understanding speech in background noise is challenging. Wearing face masks, as mandated during the COVID-19 pandemic, makes it even harder. We developed a multisensory setup, including a sensory substitution device (SSD), that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers to understand distorted speech in noise. After a short session (30–45 min) of repeating sentences, with or without concurrent matching vibrations, we found a comparable mean group improvement of 14–16 dB in Speech Reception Threshold (SRT) in two test conditions: when participants repeated sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete either type of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7 dB) was significantly lower (i.e., harder) than for the auditory training (23.9 ± 11.8 dB), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most participants (70–80%) understood speech in noise better (by 4–6 dB on average) when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~2 dB). The smallest effect of either training type was found in the third test condition, in which participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable a more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings for basic science. In particular, we show that even in adulthood, long after the classical "critical periods" of development have passed, a new pairing between a given computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic conditions.
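The abstract describes deriving the fingertip vibrations from the low frequencies of the speech signal. As a minimal sketch of that idea (not the authors' actual implementation; the 300 Hz cutoff and filter order are illustrative assumptions), a low-pass filter can reduce a speech waveform to the low-frequency content a vibrotactile actuator could render:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def speech_to_vibration(signal, fs, cutoff_hz=300.0, order=4):
    """Low-pass filter a speech waveform, keeping only the low
    frequencies a fingertip actuator can render.
    cutoff_hz and order are illustrative, not the paper's settings."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    # Zero-phase filtering avoids delaying the vibration relative to the audio.
    return sosfiltfilt(sos, signal)

# Demo on a synthetic signal: a 200 Hz "voiced" component (within the
# passband) plus a 3 kHz "fricative" component (far above it).
fs = 16000
t = np.arange(fs) / fs  # 1 second at 16 kHz
speech = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
vib = speech_to_vibration(speech, fs)
```

After filtering, the 200 Hz component survives while the 3 kHz component is strongly attenuated, so the vibration conveys mainly the voicing/prosody cues the abstract refers to.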

Highlights

  • Understanding speech in background noise is challenging

  • We showed that participants significantly improved from Session 1 to Session 2 in all three conditions: in the Audio Only (A) condition, from a mean of 22.96 ± 10 dB to 6.47 ± 6.9 dB [t(39) = 11.72, p < 0.001, effect size = 1.851, power = 1]; in the Audio-Tactile matching (ATm) condition, from 16.8 ± 9.15 dB to 2.09 ± 6 dB [t(39) = 11.9, p < 0.001, effect size = 1.82, power = 1]; and in the Audio-Tactile non-matching (ATnm) condition, from 16.67 ± 9.3 dB to 10.16 ± 8.7 dB [t(39) = 5.46, p < 0.001, effect size = 0.86, power = 0.99]

  • In the current experiment we replicated, in a larger group of 40 participants, our previous proof-of-concept study, which showed enhanced speech-in-noise understanding when auditory speech signals were complemented with matching low-frequency tactile vibrations on the fingertips [31]
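The highlights report paired Session 1 vs. Session 2 comparisons with t-statistics and effect sizes. A minimal sketch of that style of analysis on made-up data (the values below are illustrative only, not the study's measurements; the effect size is computed as Cohen's d for paired samples):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical SRTs (dB) for the same 40 listeners before and after
# training; lower SRT = better speech-in-noise understanding.
session1 = rng.normal(23.0, 10.0, size=40)
session2 = session1 - rng.normal(16.5, 5.0, size=40)  # training lowers SRT

# Paired t-test across the two sessions.
t_stat, p_val = stats.ttest_rel(session1, session2)

# Cohen's d for paired data: mean of the differences over their SD.
diff = session1 - session2
cohens_d = diff.mean() / diff.std(ddof=1)
```

With per-listener pairing, the test uses the within-subject differences, which is why large, consistent improvements yield the very large t-values and effect sizes reported above even at n = 40.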


Introduction

Understanding speech in background noise is challenging, and wearing face masks, as mandated during the COVID-19 pandemic, makes it even harder. Many users of modern hearing aids and cochlear implants complain that their devices fail to effectively compensate for their hearing loss in ambiguous acoustic situations [15–17]. All this indicates the importance of developing novel training methods and devices that can improve communication. Multisensory speech training regimes that complement audition with vision have been found successful [10,25–27], including in the rehabilitation of patients with hearing aids (HA) and/or cochlear implants (CI), by adding speech reading, gestures or sign language cues [1,6,28].
