Abstract

The ASHA Leader | Feature | 1 Mar 2005
Aural Habilitation Update: The Role of Speech Production Skills of Infants and Children With Hearing Loss
Sheila R. Pratt
https://doi.org/10.1044/leader.FTR2.10042005.8

It is well known that the development of speech is extremely limited without adequate auditory input and feedback. An obvious example is that hearing loss in infancy and early childhood usually affects all aspects of speech production unless there is early and consistent use of sensory aids as well as substantive sensorimotor and linguistic training. The speech development of infants and children with hearing loss hinges on their abilities to use audition not only to learn the sounds of their language, but also to use their articulators to produce those sounds and to make use of auditory feedback to refine their speech over time. As such, the speech of children with prelingual hearing loss is particularly susceptible to delay and disorder, especially if the severity of the hearing loss is substantial and intervention is delayed or inadequate.

Speech Development

During the first six months of life (and possibly in utero), auditory perceptual learning is vital for acquiring oral language and speech, although the maturation timeline for speech production in normal-hearing children is relatively lengthy. This protracted timeline may account for the long-term training and treatment needs of many children with hearing loss, even those identified and fitted early with sensory aids (Yoshinaga-Itano & Sedey, 2000). Young children with normal hearing typically begin babbling around 5–6 months of age and start verbal expression around 12 months of age. However, their speech production skills continue to be refined through the school-age years, well beyond the point at which their basic phonological inventories have been established.
For example, vowel space, voice-onset times, and vocal control adjust throughout early childhood (Assmann & Katz, 2000; Koenig, 2001; Lee, Potamianos, & Narayanan, 1999). Furthermore, substantial acoustic variability is a hallmark of children's speech production until late childhood. Although the research is somewhat mixed on the development of coarticulation, children appear to be less able than adults to coarticulate their speech gestures in a consistent manner, and as a consequence, their speech is less intelligible than that of adults (Katz, Kripke, & Tallal, 1991; Nittrouer, 1993). The refinement of auditory processing of speech has a similar developmental timeline. Children may apply different rules or weights to speech cues than adults, and these weights change throughout childhood (Nittrouer, 2002; Nittrouer, Crowther, & Miller, 1998). Their auditory processing of speech also appears to be more susceptible to acoustic and linguistic perturbations than is observed with adults. Children are more adversely affected than adults by background noise, reverberation, talker variability, reductions in signal bandwidth, and the number of signal channels (Eisenberg et al., 2000; Ryalls & Pisoni, 1997; Kortekaas & Stelmachowicz, 2000).

The Role of Audition in Speech Development and Production

For mature speakers, audition acts as an error detector and a means of monitoring speaking conditions. It is considered to be slower than other forms of sensory information generated during speech (e.g., proprioception), and therefore is likely limited to a feedback role (Perkell et al., 1997). Speakers use audition to determine whether their articulators have produced sounds that are acoustically off-target. Audition also provides information for corrective adjustments and, as a consequence, contributes to the maintenance of speech integrity.
Studies of frequency- and spectrally-shifted speech feedback have shown that adults rapidly adjust to minor acoustic perturbations with compensatory and/or matching strategies (Bauer & Larson, 2003; Houde & Jordan, 2002; Jones & Munhall, 2002, 2003). They appear to adjust their articulators so that their speech productions match their internal representations. In addition to acting as an error detector, hearing is used by mature speakers to determine how they should adjust their speech in various acoustic, linguistic, and social environments. For example, adults know when to speak slower, louder, softer, or more precisely in order to accommodate their listener or the environmental conditions (Perkell et al., 1997). In contrast, many young children are unable to adjust the clarity of their speech, even when explicitly directed to do so (Ide-Helvie et al., 2004).

Audition also allows the development of articulatory organization by providing information about how to position, move, and coordinate the articulators for speech, movements that can differ from those associated with vegetative functions of the same mechanisms (Moore & Ruark, 1996). For example, infants use audition to learn how to shift from a vegetative breathing pattern to a pattern that can support speech. They learn how to position and move their tongues and to judge the acoustic consequences of those gestures. Coordination of the larynx with the vocal tract and upper airway articulators is refined over years but requires an intact auditory system (Koenig, 2001; Tye-Murray, 1992). The lip and jaw movements associated with speech in infants and young children are highly variable but distinct from sucking, chewing, and smiling (Green et al., 2000; Green, Moore, & Reilly, 2002; Moore & Ruark, 1996). The implication is that although the same peripheral mechanisms are used across oral and respiratory functions, the differing goals require substantially distinct coordination and feedback efforts.
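The error-detection-and-correction role described in this section can be caricatured as a simple negative-feedback loop. The sketch below is an illustration only; the update gain, step count, and F0 values are assumptions for demonstration, not parameters from the cited perturbation studies.

```python
# Minimal sketch of compensation to perturbed auditory feedback.
# A speaker aims for an internal F0 target; the feedback they hear
# is shifted upward, and on each cycle they adjust production partway
# in the opposing direction. All values are illustrative assumptions.

def compensate(target_f0, shift_hz, gain=0.5, steps=10):
    """Return the trajectory of produced F0 under shifted feedback."""
    produced = target_f0
    trajectory = []
    for _ in range(steps):
        heard = produced + shift_hz      # perturbed auditory feedback
        error = heard - target_f0        # mismatch with internal target
        produced -= gain * error         # partial opposing adjustment
        trajectory.append(produced)
    return trajectory

# With a +20 Hz shift, production settles near 180 Hz, so the shifted
# feedback the speaker hears matches the 200 Hz internal target.
path = compensate(200.0, 20.0)
```

The fixed point of this loop is full compensation (production lowered by the amount of the shift), which mirrors the finding that adults adjust so that what they hear matches their internal representation.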
The coordination needed to chew and swallow efficiently develops over early childhood but is largely independent of hearing, whereas the coordination required to move between vowel and consonant gestures, particularly in a fluid and coarticulated manner, is strongly influenced by hearing (Baum & Waldstein, 1991; Guenther, 1995; Tye-Murray, 1992; Waldstein & Baum, 1991). Audition has a primary sensorimotor role in the development of speech, but it also is fundamental to infants and young children learning the sounds of their language. Furthermore, it helps them learn how specific speech events relate to their phonology, so that with development, young children become better able to use their hearing to inform them about the sequencing of speech gestures and the correctness of subsequent productions. Over time children learn to use audition to monitor ongoing speech, detect errors, and make corrective adjustments.

Hearing Loss and Speech Production

Hearing loss is common in the general population, but its effects on speech production are most pronounced in individuals whose hearing loss is congenital or acquired in early childhood. Most adults who acquire their hearing losses later in life suffer little or no deterioration in intelligibility, likely because their residual hearing provides sufficient feedback and because their mature speech production systems rely more on orosensory than auditory information to maintain proper control (Guenther, 1995; Goehl & Kaufman, 1984; Perkell et al., 1997). The speech differences that they do exhibit are subtle and usually imperceptible, even in cases of complete or nearly complete adventitious hearing loss. Nonetheless, some adventitiously deafened adults exhibit reduced speaking rate and compromised articulatory and phonatory precision (Kishon-Rabin et al., 1999; Lane & Webster, 1991; Lane et al., 1995; Leder et al., 1987; Waldstein, 1990; Perkell et al., 1992).
These speech differences are similar in nature, but not in severity, to those observed with prelingually deafened speakers. Most infants and young children with hearing loss demonstrate disordered phonation and articulation, as well as delays in the acquisition of sound categories. The entire speech production system can be affected, from respiratory support to the coarticulation of ongoing speech (Pratt & Tye-Murray, 1997). This is especially true if the hearing loss is identified late or goes untreated for a protracted period. Furthermore, the overlap and interaction of disordered sound production and linguistic delay contribute to poor speech integrity and restricted speech development. In this population, babbling generally does not appear before 12 months of age (Oller & Eilers, 1988; Oller et al., 1985), and canonical babbling has been observed to emerge as late as 31 months (Lynch, Oller, & Steffens, 1989). These infants also produce fewer instances of canonical babble and include a more limited range of consonants in their babble (Stoel-Gammon, 1988; Stoel-Gammon & Otomo, 1986; Wallace, Menn, & Yoshinaga-Itano, 2000). However, later speech intelligibility is better predicted by the consonant inventory used in emerging spoken language during the second year of life than by that used during babble (Obenchain, Menn, & Yoshinaga-Itano, 2000). The phonetic repertoires of infants with severe-to-profound hearing loss often are restricted when compared to those of their normal-hearing peers, although there is abundant individual variability (Lach, Ling, Ling, & Ship, 1970; Stoel-Gammon & Otomo, 1986; Wallace et al., 2000; Yoshinaga-Itano & Sedey, 2000). The early speech inventories of infants with severe-to-profound hearing loss consist predominantly of motorically easy sounds such as vowels and bilabial consonants. The sounds of their inventories also contain more low-frequency information, which is more audible to them.
For example, the babbling of infants with hearing loss often has a high concentration of nasals and glides, which include low-frequency continuant cues (Stoel-Gammon & Otomo, 1986). Without early intervention and appropriate fitting of sensory aids, the speech-sound inventories of many children with hearing loss do not attain full maturity. Yoshinaga-Itano and Sedey (2000) found that children with moderate-to-severe hearing losses did not reach an age-appropriate complement of vowel and consonant sounds until about 4 and 5 years of age, respectively, and many children with profound hearing loss had restricted inventories even at 5 years of age. Children with profound hearing loss often reach an early plateau in their speech skill development. For instance, the speech of many children with severe-to-profound hearing loss demonstrates little improvement in sound inventory and intelligibility after 8 years of age, even with the initiation of extensive training (Hudgins & Numbers, 1942; McGarr, 1987; Smith, 1975). Such results imply that, like auditory and language interventions, speech production therapy should be an important component of early intervention, and that the common practice of delaying speech training in children with hearing loss until they have functional language is developmentally untenable if the goal is for them to be oral communicators. In addition to the relationship between age of onset and speech impairment severity, there also is a moderately positive relationship between the severity of hearing loss and the extent of the associated speech difficulties (Boothroyd, 1969; Levitt, 1987; Smith, 1975). For example, children with mild-to-moderate hearing loss, particularly if well aided, tend to exhibit speech differences that are mild (Elfenbein, Hardin-Jones, & Davis, 1994; Oller & Kelly, 1974; West & Weber, 1973).
Elfenbein and colleagues found that children with mild-to-moderate hearing loss exhibited good intelligibility but higher-than-normal rates of affricate and fricative substitutions. Mild hoarseness and resonance problems also were present in 20% to 30% of this group of children. Moreover, these children tend to have increased rates of voicing irregularities, difficulties with /r/ production, and omissions of back and word-final consonants.

Early studies of children with profound prelingual hearing loss showed that few acquired speech skills sufficient to interact easily using spoken language. On average, less than 20% of their words were intelligible to listeners who were not familiar with their speech (Hudgins & Numbers, 1942; Markides, 1970; Smith, 1975). Smith (1975) evaluated 40 children with varying levels of hearing loss and, on average, only 18.7% (range: 0% to 76%) of their words could be identified by inexperienced listeners. As expected, overall intelligibility was inversely related to the frequency of segmental and suprasegmental errors. However, with early identification of hearing loss and early intervention (i.e., fitting of sensory devices, behavioral training, and parent counseling), the number of children with severe-to-profound hearing loss and intelligible speech has increased (Uchanski & Geers, 2003). Many more children are developing sufficient speech perception to support development of speech production and oral language, but these advances may have added to the overall heterogeneity of the population (Higgins et al., 2003). Other factors contribute to the diversity of speech production skills observed with these children. For instance, cognitive skill (particularly nonverbal intelligence) has been found to be an important predictor of functional speech and oral language in children with hearing loss (Geers et al., 2002; Tobey et al., 2003).
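Intelligibility figures such as Smith's 18.7% average reflect a word-identification metric: the proportion of a speaker's intended words that listeners can identify. A minimal sketch of that computation follows; the transcripts are hypothetical, and the scoring procedures in the cited studies were more elaborate (multiple listeners, controlled word lists).

```python
# Word-level intelligibility: percent of a speaker's intended words
# that a listener identifies correctly. Transcripts are hypothetical;
# real studies average scores across listeners and materials.

def intelligibility(intended, transcribed):
    """Percent of intended words matched at the same position."""
    correct = sum(a == b for a, b in zip(intended, transcribed))
    return 100.0 * correct / len(intended)

intended = ["the", "boy", "ran", "home"]
heard = ["the", "toy", "ran", "home"]
score = intelligibility(intended, heard)  # 75.0
```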
Auditory experience in infancy and early childhood, even of limited duration, positively influences the speech production skills of children who have severe-to-profound hearing loss (Geers, 2004). The use of sensory aids has a substantial impact on speech outcomes, but somewhat surprisingly, the age at which infants and young children are fitted with cochlear implants has not surfaced in studies of speech production as a significant predictor of later speech intelligibility (Geers et al., 2002; Tobey et al., 2003). Early implantation (before 2 years of age) is, however, related to more normal oral communication development as a whole, encompassing both speech and oral language (Geers, 2004). It may be that the age of implantation is not easily separated from other influences of intervention, such as the orientation of the habilitation program and parent involvement, which relate strongly to children becoming auditory perceptual learners and users of auditory feedback. Another consideration is that many early-implanted children may still be implanted too late to observe a clear impact on speech production. The critical age at which hearing aids should be fitted has not been investigated, but as with cochlear implants, it is assumed that earlier is better.

Oromotor integrity and language skills are additional factors that often are neglected in studies of speech development in children with hearing loss. A substantial number of infants and children with hearing loss present with secondary handicapping conditions, such as neurological disorders. When these neurological disorders involve the speech mechanism, the development of functional speech is difficult even if audition is optimized. As such, it is not unusual for a child with hearing loss to have a coexisting dysarthria along with the speech impairment secondary to the hearing loss.
A subset of children with hearing loss also may have an apraxia of speech, but separating the impact of hearing loss from an apraxia of speech is difficult because the associated speech characteristics overlap (McNeil, Robin, & Schmidt, 1997). Language disorders also are commonly observed in children with hearing loss, and frequently manifest as phonological disorders and lexical delays. As a result, extricating the sensorimotor impact of hearing loss on speech production from the influences of language disorder in individual children is not always straightforward (Peng et al., 2004).

Habilitation: Sensory Aids and Treatment

Most speech training approaches depend on optimizing the use of residual hearing, although some approaches use other modalities (Pratt, Heintzelman, & Deming, 1993; Pratt & Tye-Murray, 1997). Correspondingly, it is generally believed that speech is learned most easily if infants and children learn and monitor their speech through their auditory systems. Therefore, early and proper fitting and consistent use of sensory aids, along with auditory and language training, are important components of speech production training. In support of this auditory-based approach are the relationship between the severity of prelingual hearing loss and the extent of speech delay/disorder found in children (Boothroyd, 1969; Levitt, 1987; Smith, 1975), as well as the influence of any history of previous hearing (Geers, 2004). The relationship between audiometric configuration and speech intelligibility also argues for the importance of audition if the goal for a child is oral communication (Levitt, 1987; Osberger, Maso, & Sam, 1993). There is a growing literature supporting the positive impact of cochlear implants on speech development, as well as the role that auditory-oral-based training programs play in the communication outcomes of children fitted with cochlear implants (Geers et al., 2002; Tobey et al., 2003).
There is, however, limited efficacy data for children with less severe hearing loss, who are typically fitted with hearing aids. The lack of research in this area is glaring because wearable electroacoustic hearing aids have been available for more than 50 years (Lybarger, 1988) and are a fundamental component of treatment approaches for most children with hearing loss. Furthermore, more infants and children are fitted with hearing aids than with cochlear implants. Preliminary data reported by Stelmachowicz and her colleagues (2004) on three infants fitted early with hearing aids suggested delays in sound category acquisition consistent with patterns previously reported in the literature. Sound inventories were impoverished, consonants were more affected than vowels, and sounds containing high-frequency cues were particularly limited. Pittman and colleagues (2003) observed that the amplitude of high-frequency speech cues directed to and produced by children wearing hearing aids may not be sufficient, although they did not connect their results directly to speech production outcomes. Pratt, Grayhack, Palmer, and Sabo (2003) found that differences in hearing aid configuration could alter the vowel spacing of children, even though the children in their study had intelligible speech and the speech tokens measured were limited to acceptable productions. Their data indicated that hearing aids could alter the speech of children, but provided little information about the impact that hearing aids may have on speech development. Given the paucity of data, as well as the expansion of universal infant hearing screening programs, it is critical that more research be done in this area. Increasing numbers of infants with hearing loss will be identified shortly after birth and, if we are to treat them effectively, more should be known about the impact that hearing aids and other sensory aids have on speech and auditory system development.
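The vowel-space measures referenced in this article are commonly quantified as the area of the polygon that the corner vowels trace in F1-F2 space. The sketch below uses the shoelace formula with hypothetical formant values; it illustrates the measure, not the specific analysis used in any cited study.

```python
# Vowel space area via the shoelace formula over (F1, F2) vertices.
# The corner-vowel formant values are illustrative placeholders,
# not measurements from the studies cited in the article.

def vowel_space_area(formants):
    """Area of the polygon whose vertices are (F1, F2) pairs in order."""
    n = len(formants)
    total = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]  # wrap to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical corner vowels /i/, /ae/, /a/, /u/ as (F1, F2) in Hz:
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
area = vowel_space_area(corners)  # area in Hz^2
```

A smaller area indicates more centralized, less distinct vowels, which is one way a hearing aid configuration change could show up acoustically even in intelligible speech.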
References

Assmann, P. F., & Katz, W. F. (2000). Time-varying spectral change in the vowels of children and adults. Journal of the Acoustical Society of America, 108, 1856–1866.
Baum, S., & Waldstein, R. (1991). Perseveratory coarticulation in the speech of profoundly hearing-impaired and normally hearing children. Journal of Speech and Hearing Research, 34, 1286–1292.
Bauer, J. J., & Larson, C. R. (2003). Audio-vocal responses to repetitive pitch-shift stimulation during a sustained vocalization: Improvements in methodology for the pitch-shifting technique. Journal of the Acoustical Society of America, 114, 1048–1054.
Boothroyd, A. (1969). Distribution of hearing levels in the student population of the Clarke School for the Deaf. Northampton, MA: Clarke School for the Deaf.
Elfenbein, J., Hardin-Jones, M., & Davis, J. (1994). Oral communication skills of children who are hard of hearing. Journal of Speech and Hearing Research, 37, 216–226.
Eisenberg, L., Shannon, R., Martinez, A. S., & Wygonski, J. (2000). Speech recognition with reduced spectral cues as a function of age. Journal of the Acoustical Society of America, 107, 2704–2710.
Geers, A., Brenner, C., Nicholas, J., Uchanski, R., Tye-Murray, N., & Tobey, E. (2002). Rehabilitation factors contributing to implant benefit in children. Annals of Otology, Rhinology, and Laryngology, Supplement 189, 127–130.
Goehl, H., & Kaufman, D. (1984). Do the effects of adventitious deafness include disordered speech? Journal of Speech and Hearing Disorders, 49, 58–64.
Green, J. R., Moore, C. A., Higashikawa, M., & Steeve, R. W. (2000). The physiologic development of speech motor control: Lip and jaw coordination. Journal of Speech, Language, and Hearing Research, 43, 239–255.
Green, J. R., Moore, C. A., & Reilly, K. J. (2002). The sequential development of jaw and lip control for speech. Journal of Speech, Language, and Hearing Research, 45, 66–79.
Guenther, F. H. (1995). Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production. Psychological Review, 102, 594–621.
Higgins, M. B., McCleary, E. A., Carney, A. E., & Schulte, L. (2003). Longitudinal changes in children's speech and voice physiology after cochlear implantation. Ear and Hearing, 24, 48–70.
Houde, J. F., & Jordan, M. I. (2002). Sensorimotor adaptation of speech I: Compensation and adaptation. Journal of Speech, Language, and Hearing Research, 45, 295–310.
Hudgins, C., & Numbers, F. (1942). An investigation of the intelligibility of speech of the deaf. Genetic Psychology Monographs, 25, 289–392.
Ide-Helvie, D. L., McClearly, W. A., Sullivan, S. C., Lotto, A. J., & Higgins, M. B. (2004). Strategies used to increase speech clarity by normal-hearing children. Journal of the Acoustical Society of America, 116, 2522.
Jones, J. A., & Munhall, K. G. (2002). The role of auditory feedback during phonation: Studies of Mandarin tone production. Journal of Phonetics, 30, 303–320.
Jones, J. A., & Munhall, K. G. (2003). Learning to produce speech with an altered vocal tract: The role of auditory feedback. Journal of the Acoustical Society of America, 113, 532–543.
Katz, W. F., Kripke, C., & Tallal, P. (1991). Anticipatory coarticulation in the speech of adults and young children: Acoustic, perceptual and video data. Journal of Speech and Hearing Research, 34, 1222–1232.
Kishon-Rabin, L., Taitelbaum, R., Tobin, Y., & Hildesheimer, M. (1999). The effect of partially restored hearing on speech production of postlingually deafened adults with multichannel cochlear implants. Journal of the Acoustical Society of America, 106, 2843–2857.
Koenig, L. L. (2001). Distributional characteristics of VOT in children's voiceless aspirated stops and interpretation of developmental trends. Journal of Speech, Language, and Hearing Research, 44, 1058–1068.
Kortekaas, R., & Stelmachowicz, P. (2000). Bandwidth effects on children's perception of the inflectional morpheme /s/: Acoustical measurements, auditory detection, and clarity rating. Journal of Speech, Language, and Hearing Research, 43, 645–660.
Lach, R., Ling, D., Ling, L., & Ship, N. (1970). Early speech development in deaf infants. American Annals of the Deaf, 115, 522–526.
Lane, H., & Webster, J. W. (1991). Speech deterioration in postlingually deafened adults. Journal of the Acoustical Society of America, 89, 859–866.
Lane, H., Wozniak, J., Matthies, M., Svirsky, M., & Perkell, J. (1995). Phonemic resetting versus postural adjustments in the speech of cochlear implant users: An exploration of voice-onset-time. Journal of the Acoustical Society of America, 98, 3096–3106.
Leder, S., Spitzer, J., Kirchner, J. C., Flevaris-Phillips, C., Milner, P., & Richardson, F. (1987). Speaking rate of adventitiously deaf male cochlear implant candidates. Journal of the Acoustical Society of America, 82, 843–846.
Lee, S., Potamianos, A., & Narayanan, S. (1999). Acoustics of children's speech: Developmental changes of temporal and spectral parameters. Journal of the Acoustical Society of America, 105, 1455–1468.
Levitt, H. (1987). Interrelationships among the speech and language measures. In H. Levitt, N. McGarr, & D. Geffner (Eds.), Development of language and communication skills of hearing-impaired children. ASHA Monographs, 26, 123–139.
Lybarger, S. (1988). A historical overview. In R. Sandlin (Ed.), Handbook of hearing aid amplification, Volume I (pp. 1–30). Boston, MA: College-Hill Press.
Lynch, M., Oller, K., & Steffens, M. (1989). Development of speech-like vocalizations in a child with congenital absence of cochleas: The case of total deafness. Applied Psycholinguistics, 10, 315–333.
McGarr, N. (1987). Communication skills of hearing-impaired children in schools for the deaf. In H. Levitt, N. McGarr, & D. Geffner (Eds.), Development of language and communication in hearing-impaired children. ASHA Monographs, 26, 91–107.
McNeil, M. R., Robin, D. A., & Schmidt, R. A. (1997). Apraxia of speech: Definition, differentiation, and treatment. In M. McNeil (Ed.), Clinical management of sensorimotor speech disorders (pp. 311–344). New York, NY: Thieme Medical Publishers.
Markides, A. (1970). The speech of deaf and partially hearing children with special reference to factors affecting intelligibility. British Journal of Disorders of Communication, 5, 126–140.
Moore, C. A., & Ruark, J. L. (1996). Does speech emerge from earlier appearing oral motor behaviors? Journal of Speech and Hearing Research, 39, 1034–1047.
Nittrouer, S. (2002). Learning to perceive speech: How fricative perception changes, and how it stays the same. Journal of the Acoustical Society of America, 112, 711–719.
Nittrouer, S., Crowther, C. S., & Miller, M. E. (1998). The relative weighting of acoustic properties in the perception of [s]+stop clusters by children and adults. Perception & Psychophysics, 60, 51–64.
Obenchain, P., Menn, L., & Yoshinaga-Itano, C. (2000). Can speech development at 36 months in children with hearing loss be predicted from information available in the second year of life? Volta Review, 100(5), 149–180.
Oller, D., Eilers, R., Bull, D., & Carney, A. (1985). Pre-speech vocalizations of a deaf infant: A comparison with normal metaphonological development. Journal of Speech and Hearing Research, 28, 47–63.
Oller, D., & Kelly, C. (1974). Phonological substitution processes of a hard-of-hearing child. Journal of Speech and Hearing Disorders, 39, 65–74.
Osberger, M. J., Maso, M., & Sam, L. (1993). Speech intelligibility of children with cochlear implants, tactile aids, or hearing aids. Journal of Speech and Hearing Research, 36, 186–203.
Peng, S., Weiss, A. L., Cheung, H., & Lin, Y. (2004). Consonant production and language skills in Mandarin-speaking children with cochlear implants. Archives of Otolaryngology, Head and Neck Surgery, 130, 592–597.
Perkell, J., Lane, H., Svirsky, M., & Webster, J. (1992). Speech of cochlear implant patients: A longitudinal study of vowel production. Journal of the Acoustical Society of America, 91, 2961–2978.
Perkell, J. S., Matthies, M. L., Lane, H., Guenther, F. H., Wilhelms-Tricarico, R., Wozniak, J., & Guiod, P. (1997). Speech motor control: Acoustic goals, saturation effects, auditory feedback and internal models. Speech Communication, 22, 227–250.
Pittman, A. L., Stelmachowicz, P. G., Lewis, D. E., & Hoover, B. M. (2003). Spectral characteristics of speech at the ear: Implications for amplification in children. Journal of Speech, Language, and Hearing Research, 46, 649–657.
Pratt, S. R., Heintzelman, A. T., & Deming, S. E. (1993). The efficacy of using the IBM SpeechViewer Vowel Accuracy Module to treat young children with hearing impairment. Journal of Speech and Hearing Research, 36, 1063–1074.
Pratt, S. R., & Tye-Murray, N. A. (1997). Speech impairment secondary to hearing impairment. In M. McNeil (Ed.), Clinical management of sensorimotor speech disorders (pp. 345–388). New York, NY: Thieme Medical Publishers.
Smith, C. (1975). Residual hearing and speech production in the deaf. Journal of Speech and Hearing Research, 18, 795–811.
Stelmachowicz, P. G., Pittman, A. L., Hoover, B. M., Lewis, D. E., & Moeller, M. P. (2004). The importance of high-frequency audibility in the speech and language development of children with hearing loss. Archives of Otolaryngology, Head and Neck Surgery, 130, 556–562.
Stoel-Gammon, C. (1988). Prelinguistic vocalizations of hearing-impaired and normally hearing subjects: A comparison of consonantal inventories. Journal of Speech and Hearing Disorders, 53, 302–315.
Stoel-Gammon, C., & Otomo, K. (1986). Babbling development of hearing-impaired and normally hearing subjects. Journal of Speech and Hearing Disorders, 51, 33–41.
Tobey, E. A., Geers, A. E., Brenner, C., Altuna, D., & Gabbert, G. (2003). Factors associated with development of speech production skills in children implanted by age five. Ear and Hearing, 24(1S), 36S–45S.
Uchanski, R. M., & Geers, A. E. (2003). Acoustic characteristics of the speech of young cochlear implant users: A comparison with normal-hearing age-mates. Ear and Hearing, 24(1S), 90S–105S.
Waldstein, R. (1990). Effects of postlingual deafness on speech production: Implications for the role of auditory feedback. Journal of the Acoustical Society of America, 88, 2099–2114.
Waldstein, R., & Baum, S. (1991). Anticipatory coarticulation in the speech of profoundly hearing-impaired and normally hearing children. Journal of Speech and Hearing Research, 34, 1276–1285.
Wallace, V., Menn, L., & Yoshinaga-Itano, C. (2000). Is babble the gateway to speech for all children? A longitudinal study of children who are deaf or hard of hearing. Volta Review, 100(5), 121–148.

Author Notes

Sheila R. Pratt is in the Department of Communication Science & Disorders at the University of Pittsburgh. Contact her at [email protected].
Volume 10, Issue 4, March 2005. Published in print: Mar 1, 2005. © 2005 American Speech-Language-Hearing Association.
