Vowel Data Research Articles

Overview
89 articles published in the last 50 years

Related Topics

  • American English Vowels
  • English Vowels
  • Synthetic Vowels
  • Nasal Consonants
  • Stressed Vowels

Articles published on Vowel Data

Helmholtz: The Beginning of Modern Voice Acoustics

Abstract: Voice acoustics examines resonances, formants, spectral slopes and many other factors. The beginnings of this science date to the writings of visionary physicist Hermann Helmholtz (1821–1894). His research predated high-tech computer analysis but his contributions to the science of singing are unparalleled. When examined with modern tools, his work is astonishingly accurate and relevant to our contemporary understanding of the singing voice. In this article the authors examine singing voice acoustics according to Helmholtz, provide a modern tutorial of source-filter theory, and analyze Helmholtz’s vowel data with current tools.
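
The source-filter view reviewed in this article can be illustrated with a toy synthesis: a glottal-like pulse train (source) passed through second-order resonators at formant frequencies (filter). This is a minimal sketch for illustration only; the formant values and bandwidths below are assumptions, not taken from the article or from Helmholtz's data.

```python
# Minimal source-filter illustration (assumed formants/bandwidths, not from the article).
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sample rate (Hz)
f0 = 120                        # fundamental frequency of the source (Hz)
n = int(fs * 0.5)               # 0.5 s of signal

# Source: impulse train at f0 (a crude stand-in for glottal pulses)
source = np.zeros(n)
source[::int(fs / f0)] = 1.0

# Filter: cascade of two-pole resonators at assumed formant frequencies/bandwidths for /a/
formants = [(700, 130), (1220, 70), (2600, 160)]    # (frequency Hz, bandwidth Hz)
signal = source
for freq, bw in formants:
    r = np.exp(-np.pi * bw / fs)                     # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs                    # pole angle from formant frequency
    signal = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r ** 2], signal)

signal /= np.max(np.abs(signal))                     # normalize for playback or plotting
```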

  • Journal: Journal of Singing
  • Published: May 1, 2025
  • Authors: Christian T Herbst + 1

Advanced optimization strategies for combining acoustic features and speech recognition error rates in multi-stage classification of Parkinson's disease severity.

Recent research has made significant progress in identifying individuals with Parkinson's disease (PD) using speech analysis techniques. However, these studies have often treated the early and advanced stages of PD as equivalent, overlooking the distinct speech impairments and symptoms that vary across stages. This research aims to enhance diagnostic accuracy by using advanced optimization strategies to combine speech recognition results (character error rates) with the acoustic features of vowels. The dysphonia features of three sustained Korean vowels, /아/ (a), /이/ (i), and /우/ (u), were examined for their diversity and strong correlations. Four well-established machine-learning classifiers (Random Forest, Support Vector Machine, k-Nearest Neighbors, and Multi-Layer Perceptron) were employed for consistent and reliable analysis. By fine-tuning the Whisper model specifically for PD speech recognition and optimizing it for each severity level of PD, we significantly improved the discernibility between PD severity levels. Combined with the vowel data, this enhancement allowed for more precise classification, improving detection accuracy by 5.87% for 3-level severity classification on the PD "ON"-state dataset and by 7.8% on the PD "OFF"-state dataset. This approach not only evaluates the effectiveness of different feature extraction methods but also minimizes the variance across final classification models, thus detecting varying severity levels of PD more effectively.
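
As a rough illustration of the feature-fusion idea described above (not the authors' pipeline), per-recording acoustic features can be concatenated with a speech-recognition error rate and fed to one of the named classifiers. The feature layout and data below are placeholders.

```python
# Hypothetical sketch: fuse acoustic vowel features with a character error rate (CER)
# and classify 3-level severity with a Random Forest. Placeholder data, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
acoustic = rng.normal(size=(n, 10))      # e.g., jitter, shimmer, HNR, formants per vowel
cer = rng.uniform(0, 1, size=(n, 1))     # character error rate from an ASR model
X = np.hstack([acoustic, cer])           # feature fusion: acoustic features + CER
y = rng.integers(0, 3, size=n)           # 3-level severity labels (placeholder)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```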

  • Journal: Biomedical Engineering Letters
  • Published: Mar 7, 2025
  • Authors: S I M M Raton Mondol + 2

An ultrasound investigation of Hnaring Lutuv (high?) vowels

Lutuv (also known as Lautu) is an under-documented Chin language from the Tibeto-Burman language family spoken by 18,000 people, both in Chin State in western Burma and in diaspora communities worldwide, including approximately 1000 people in the Indianapolis Chin refugee community. Lutuv utilizes a typologically rare six-way contrast in the higher part of the vowel space (i y ɨ ʉ ɯ u, see Bohnert et al. 2022), with an additional four high diphthongized vowels (ie̯ yə̯ ɯə̯ uo̯). Previous work has also identified that the high central vowels (/ɨ ʉ/) are poorly disambiguated—acoustically, they show considerable overlap with both each other and the high back vowels, and in terms of lip posture, they do not display the characteristics of a typical rounding contrast (Bohnert & Berkson 2023). The present work utilizes 3D ultrasonography to provide detailed lingual articulatory data of Lutuv vowels with special attention paid to the high central vowels, adding a new dimension to the existing acoustic and articulatory data. Real-time images of tongue position and motion provide new insights into the complex articulatory gestures involved in the production of these sounds and constitute the first ultrasound investigation into this underdocumented language.

  • Journal: The Journal of the Acoustical Society of America
  • Published: Mar 1, 2024
  • Authors: Grayson Ziegler + 3

The Relationship Between Acoustic and Kinematic Vowel Space Areas With and Without Normalization for Speakers With and Without Dysarthria.

Few studies have reported on the vowel space area (VSA) in both acoustic and kinematic domains. This study examined acoustic and kinematic VSAs for speakers with and without dysarthria and evaluated effects of normalization on acoustic and kinematic VSAs and the relationship between these measures. Vowel data from 12 speakers with and without dysarthria, presenting with a range of speech abilities, were examined. The speakers included four speakers with Parkinson's disease (PD), four speakers with brain injury (BI), and four neurotypical (NT) speakers. Speech acoustic and kinematic data were acquired simultaneously using electromagnetic articulography during a passage reading task. Raw and normalized VSAs calculated from corner vowels /i/, /æ/, /ɑ/, and /u/ were evaluated. Normalization was achieved through z score transformations to the acoustic and kinematic data. The effect of normalization on variability within and across groups was evaluated. Regression analysis was used across speakers to assess the association between acoustic and kinematic VSAs for both raw and normalized data. When evaluating the speakers as three different groups (i.e., PD, BI, and NT), normalization reduced the standard deviations within each group and changed the relative differences in average magnitude between groups. Regression analysis revealed a significant relationship between normalized, but not raw, acoustic and kinematic VSAs, after the exclusion of an outlier speaker. Normalization reduces the variability across speakers, within groups, and changes average magnitudes affecting speaker group comparisons. Normalization also influences the correlation between acoustic and kinematic measures. Further investigation of the impact of normalization techniques upon acoustic and kinematic measures is warranted. https://doi.org/10.23641/asha.22669747.
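
For readers unfamiliar with the two measures, a quadrilateral vowel space area can be computed from the corner-vowel coordinates with the shoelace formula, and z-score normalization is a per-speaker standardization. The sketch below uses invented F1/F2 values, not the study's acoustic or kinematic data.

```python
# Sketch: quadrilateral vowel space area (shoelace formula) and per-speaker z-scoring.
# Corner-vowel values are illustrative only.
import numpy as np

def polygon_area(points):
    """Shoelace formula for the area of a polygon given ordered (x, y) vertices."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Corner vowels /i/, /æ/, /ɑ/, /u/ as (F2, F1) pairs in Hz, ordered around the quadrilateral
corners = [(2300, 300), (1900, 750), (1100, 800), (900, 350)]
print("acoustic VSA (Hz^2):", polygon_area(corners))

def zscore(values):
    """z-score transform relative to a speaker's own mean and standard deviation."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

print("normalized F1:", zscore([300, 750, 800, 350]))
```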

  • Journal: American Journal of Speech-Language Pathology
  • Published: Apr 27, 2023
  • Authors: Christina Kuo + 1
  • Open Access

Vowel Production in Children and Adults With Down Syndrome: Fundamental and Formant Frequencies of the Corner Vowels.

Atypical vowel production contributes to reduced speech intelligibility in children and adults with Down syndrome (DS). This study compares the acoustic data of the corner vowels /i/, /u/, /æ/, and /ɑ/ from speakers with DS against typically developing/developed (TD) speakers. Measurements of the fundamental frequency (fo) and first four formant frequencies (F1-F4) were obtained from single-word recordings containing the target vowels from 81 participants with DS (ages 3-54 years) and 293 TD speakers (ages 4-92 years), all native speakers of English. The data were used to construct developmental trajectories and to determine interspeaker and intraspeaker variability. Trajectories for DS differed from TD based on age and sex, but the groups were similar in showing a striking change in fo and F1-F4 frequencies around age 10 years. Findings confirm higher fo in DS, and vowel-specific differences between DS and TD in F1 and F2 frequencies, but not F3 and F4. The F2 difference between front and back vowels was a more sensitive measure of compression than reduced vowel space area/centralization across age and sex. Low vowels showed more pronounced F2 compression, which was related to reduced speech intelligibility. Intraspeaker variability was significantly greater for DS than TD for nearly all frequency values across age. Vowel production differences between DS and TD are age- and sex-specific, which helps explain contradictory results in previous studies. Increased intraspeaker variability across age in DS confirms the presence of a persisting motor speech disorder. Atypical vowel production in DS is common and related to dysmorphology, delayed development, and disordered motor control.

  • Journal: Journal of Speech, Language, and Hearing Research
  • Published: Apr 4, 2023
  • Authors: Houri K Vorperian + 3

Evaluating normalization accounts against the dense vowel space of Stockholm Swedish

Talkers vary in the phonetic realization of their vowels. One influential hypothesis holds that listeners overcome this inter-talker variability through pre-linguistic auditory mechanisms that normalize the acoustic or phonetic cues that form the input to speech recognition. Dozens of competing normalization accounts exist, including both vowel-specific accounts (e.g., Lobanov, 1971; Nearey, 1978; Syrdal and Gopal, 1986) and general-purpose accounts applicable to any type of phonetic cue (McMurray and Jongman, 2011). We add to the cross-linguistic literature by comparing normalization accounts against a new database of Swedish, a language with a particularly dense vowel inventory of 21 vowels differing in quality and quantity. We train Bayesian ideal observers (IOs) on unnormalized or normalized vowel data under different assumptions about the relevant cues to vowel identity (F0-F3, vowel duration), and evaluate their performance in predicting the category intended by the talker. The results indicate that the best-performing normalization accounts centered and/or scaled formants by talker (e.g., Lobanov), replicating previous findings for other languages with less dense vowel spaces. The relative advantage of Lobanov decreased when additional cues were included, indicating that simple centering relative to the talker's mean might be sufficient to achieve robust inter-talker perception (e.g., C-CuRE).
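
Since the Lobanov account figures prominently here, a minimal sketch of what "centering and scaling formants by talker" means: each formant is z-scored within one talker's data. The token values below are invented placeholders, not from the Swedish database.

```python
# Minimal sketch of Lobanov (per-talker z-score) normalization of formant cues.
import numpy as np

def lobanov(formants):
    """Center and scale each formant column within one talker (z-score)."""
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

# One talker's vowel tokens as rows of [F1, F2] in Hz (placeholder values)
talker_tokens = np.array([[300, 2300], [750, 1900], [800, 1100], [350, 900]])
print(lobanov(talker_tokens))
# Centering-only accounts (e.g., C-CuRE-style) would subtract the mean without dividing by the SD.
```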

  • Journal: The Journal of the Acoustical Society of America
  • Published: Mar 1, 2023
  • Authors: Anna Persson + 1

A Pattern Classification Model for Vowel Data Using Fuzzy Nearest Neighbor

Pattern classification is a crucial component of research and applications. Classification using fuzzy set theory has attracted great interest because of its ability to handle uncertainty in the parameters. One problem observed in the fuzzification of an unknown pattern is that importance is given only to the known patterns and not to their features, even though the features of the patterns play an essential role when the patterns overlap. In this paper, an optimal fuzzy nearest neighbor model is introduced in which the fuzzification of an unknown pattern is carried out using its k nearest neighbors. Through this fuzzification process, a membership matrix is formed, in which the features of the unknown pattern are also fuzzified. Classification results are verified on a completely labelled Telugu vowel data set, and the accuracy is compared with different models and with the fuzzy k-nearest-neighbor algorithm. The proposed model gives 84.86% accuracy on a 50% training data set and 89.35% accuracy on an 80% training data set. The proposed classifier learns well with a small amount of training data, resulting in an efficient and faster approach.
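
To make the fuzzy nearest-neighbor idea concrete, here is a standard fuzzy k-NN membership computation in the style of Keller et al. (1985), with toy data. It is a generic baseline sketch, not the paper's optimized feature-level model.

```python
# Generic fuzzy k-nearest-neighbor membership computation (toy data).
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, k=3, m=2.0, n_classes=None):
    """Return class membership values for sample x based on its k nearest neighbors."""
    n_classes = n_classes or int(y_train.max()) + 1
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))   # inverse-distance weights
    memberships = np.zeros(n_classes)
    for j, weight in zip(idx, w):
        memberships[y_train[j]] += weight                       # crisp neighbor labels
    return memberships / w.sum()

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(fuzzy_knn_memberships(X_train, y_train, np.array([0.2, 0.1]), k=3))
```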

  • Journal: Intelligent Automation & Soft Computing
  • Published: Jan 1, 2023
  • Authors: Monika Khandelwal + 6
  • Open Access

Contributions of natural signal statistics to spectral context effects in consonant categorization.

Speech perception, like all perception, takes place in context. Recognition of a given speech sound is influenced by the acoustic properties of surrounding sounds. When the spectral composition of earlier (context) sounds (e.g., a sentence with more energy at lower third formant [F3] frequencies) differs from that of a later (target) sound (e.g., consonant with intermediate F3 onset frequency), the auditory system magnifies this difference, biasing target categorization (e.g., towards higher-F3-onset /d/). Historically, these studies used filters to force context stimuli to possess certain spectral compositions. Recently, these effects were produced using unfiltered context sounds that already possessed the desired spectral compositions (Stilp & Assgari, 2019, Attention, Perception, & Psychophysics, 81, 2037-2052). Here, this natural signal statistics approach is extended to consonant categorization (/g/-/d/). Context sentences were either unfiltered (already possessing the desired spectral composition) or filtered (to imbue specific spectral characteristics). Long-term spectral characteristics of unfiltered contexts were poor predictors of shifts in consonant categorization, but short-term characteristics (last 475 ms) were excellent predictors. This diverges from vowel data, where long-term and shorter-term intervals (last 1,000 ms) were equally strong predictors. Thus, time scale plays a critical role in how listeners attune to signal statistics in the acoustic environment.
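
The long-term versus short-term predictor contrast described above amounts to averaging the context's spectrum over different trailing windows. A generic sketch with placeholder audio follows; only the 475 ms figure is taken from the abstract.

```python
# Sketch: long-term vs. short-term average spectrum of a context sound (placeholder audio).
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
context = rng.normal(size=2 * fs)                       # 2 s placeholder "context sentence"

def mean_spectrum(x):
    """Magnitude spectrum averaged over 25 ms frames."""
    frame = int(0.025 * fs)
    frames = x[:len(x) // frame * frame].reshape(-1, frame)
    return np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)).mean(axis=0)

long_term = mean_spectrum(context)                      # spectrum of the whole context
short_term = mean_spectrum(context[-int(0.475 * fs):])  # last 475 ms, as in the abstract
```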

  • Journal: Attention, Perception & Psychophysics
  • Published: May 13, 2021
  • Authors: Christian E Stilp + 1
  • Open Access

Perceptual validation of vowel normalization methods for variationist research

Abstract The evaluation of normalization methods sometimes focuses on the maximization of vowel-space similarity. This focus can lead to the adoption of methods that erase legitimate phonetic variation from our data, that is, overnormalization. First, a production corpus is presented that highlights three types of variation in formant patterns: uniform scaling, nonuniform scaling, and centralization. Then the results of two perceptual experiments are presented, both suggesting that listeners tend to ignore variation according to uniform scaling, while associating nonuniform scaling and centralization with phonetic differences. Overall, results suggest that normalization methods that remove variation not according to uniform scaling can remove legitimate phonetic variation from vowel formant data. As a result, although these methods can provide more similar vowel spaces, they do so by erasing phonetic variation from vowel data that may be socially and linguistically meaningful, including a potential male-female difference in the low vowels in our corpus.

  • Journal: Language Variation and Change
  • Published: Mar 1, 2021
  • Author: Santiago Barreda
  • Open Access

Features Selection Based ABC-SVM and PSO-SVM in Classification Problem

Feature selection can be used to improve the performance of classification algorithms. This study applies feature-selection algorithms to classification problems: the Artificial Bee Colony (ABC) algorithm and Particle Swarm Optimization (PSO), each combined with a Support Vector Machine (SVM). The ABC-SVM algorithm acts as a feature-selection method, choosing the optimal feature subset according to the stated objective and providing the classification results, while PSO-SVM serves as a comparison method. On the vowel dataset, PSO-SVM achieves good classification (AUC 0.873), whereas ABC-SVM is superior (AUC 0.996). The precision, recall, and F-measure results show that PSO-SVM classifies the sonar and waveform data sets well, while ABC-SVM attains superior classifier quality on the vowel data set.
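
Both wrapper methods score candidate feature subsets with an SVM. The sketch below shows only that shared fitness step (a feature mask in, a cross-validated AUC out) on synthetic binary data; the ABC and PSO search loops that propose masks are not shown and the data are not the study's.

```python
# Sketch of the fitness step in wrapper feature selection: score a candidate feature
# subset (boolean mask) by the cross-validated ROC AUC of an SVM. Synthetic binary data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated ROC AUC of an SVM restricted to the selected features."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", gamma="scale")
    return cross_val_score(clf, X[:, mask], y, cv=5, scoring="roc_auc").mean()

rng = np.random.default_rng(1)
mask = rng.random(X.shape[1]) < 0.5       # one candidate subset, as ABC or PSO would propose
print("selected:", mask.sum(), "features, AUC:", round(fitness(mask), 3))
```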

  • Journal: International Journal of Innovative Technology and Exploring Engineering
  • Published: Oct 30, 2019
  • Authors: Mochamad Wahyudi + 2
  • Open Access

Glottal Inverse Filtering Using Probabilistic Weighted Linear Prediction

Glottal inverse filtering is a noninvasive method for estimating the glottal flow from the speech signal. In this paper, we propose a method for glottal inverse filtering based on probabilistic weighted linear prediction (PWLP), in which the speech is assumed to be the output of an all-pole filter excited by the glottal flow. First, we introduce a probabilistic interpretation of WLP and propose a probabilistic temporal weighting defined as the convolution of a binary vector with a fixed window. We construct the posterior distribution based on the PWLP likelihood and a Gaussian prior on the filter coefficients, and the parameters are estimated using Gibbs sampling. The experiments are performed using synthetic data based on the Liljencrants-Fant (LF) model, synthetic data from a physical model of different vowels, and real speech data. Results demonstrate that the proposed method outperforms the best of the existing state-of-the-art methods in terms of the normalized amplitude quotient by 0.035 and 0.12 for the LF-model and physical-model synthetic data, respectively. The results on real speech data show that the glottal flow estimated by the proposed method is flatter in the closed phase and has less formant ripple than existing state-of-the-art methods. We also highlight two key features of the proposed method: first, it does not require prior detection of glottal closure or opening instants, as the temporal weights are learnt in a data-driven manner and are typically high near the closed phase of the glottal cycle; second, the Gaussian prior helps in estimating the filter coefficients when the closed-phase duration is small.
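
As background, here is plain (unweighted) linear-prediction inverse filtering, the baseline that weighted variants build on: estimate an all-pole vocal-tract filter by the autocorrelation method and filter the speech with its inverse to approximate the glottal excitation. This is a generic sketch with a placeholder signal, not the authors' PWLP/Gibbs-sampling method.

```python
# Generic LP inverse filtering sketch (autocorrelation method), as background to PWLP.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order):
    """LP coefficients via the autocorrelation (Yule-Walker) equations; returns A(z)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))        # A(z) = 1 - sum_k a_k z^-k

fs = 16000
frame = np.random.default_rng(0).normal(size=480)        # placeholder 30 ms "speech" frame
A = lpc(frame * np.hamming(len(frame)), order=18)
residual = lfilter(A, [1.0], frame)                       # inverse filtering: approximates
                                                          # the glottal excitation signal
```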

  • Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
  • Published: Jan 1, 2019
  • Authors: Achuth Rao M.V + 1

A regression approach to vowel normalization for missing and unbalanced data.

Researchers investigating the vowel systems of languages or dialects frequently employ normalization methods to minimize between-speaker variability in formant patterns while preserving between-phoneme separation and (socio-)dialectal variation. Here two methods are considered: log-mean and Lobanov normalization. Although both of these methods express formants in a speaker-dependent space, they differ in their complexity and in their implied models of human vowel perception. Typical implementations of these methods rely on balanced data across speakers, so in missing-data situations researchers may have to reduce the data available for analysis. Here, an alternative method is proposed for the normalization of vowels using the log-mean method in a linear-regression framework. The performance of the traditional approaches to log-mean and Lobanov normalization was compared against the regression approach to the log-mean method using naturalistic, simulated vowel data. The results indicate that the Lobanov method likely removes legitimate linguistic variation from vowel data and often provides very noisy estimates of the actual vowel quality associated with individual tokens. The authors further argue that the Lobanov method is too complex to represent a plausible model of human vowel perception, and so is unlikely to provide results that reflect the true perceptual organization of linguistic data.
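
For reference, the two methods compared here differ in what they remove: log-mean normalization (in its common single-parameter form) subtracts a speaker's grand mean log-formant, a uniform scaling, while Lobanov also rescales each formant by the speaker's standard deviation. A toy sketch of both with invented values; the paper's regression-based estimation for unbalanced data is not shown.

```python
# Toy contrast of log-mean vs. Lobanov normalization of one speaker's formants.
import numpy as np

tokens = np.array([[300, 2300], [750, 1900], [800, 1100], [350, 900]], dtype=float)  # [F1, F2] Hz

log_f = np.log(tokens)
log_mean_norm = log_f - log_f.mean()      # single-parameter log-mean: remove one grand mean
lobanov_norm = (tokens - tokens.mean(axis=0)) / tokens.std(axis=0)   # center and scale per formant

print(log_mean_norm)
print(lobanov_norm)
```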

  • Journal: The Journal of the Acoustical Society of America
  • Published: Jul 1, 2018
  • Authors: Santiago Barreda + 1

An acoustic-articulatory study of bilingual vowel production: Advanced tongue root vowels in Twi and tense/lax vowels in Ghanaian English

  • Journal: Journal of Phonetics
  • Published: Apr 6, 2017
  • Authors: Sam Kirkham + 1
  • Open Access

Intrinsic-cum-extrinsic normalization of formant data of vowels.

Using a known speaker-intrinsic normalization procedure, formant data are scaled by the reciprocal of the geometric mean of the first three formant frequencies. This reduces the influence of the talker but results in a distorted vowel space. The proposed speaker-extrinsic procedure re-scales the normalized values by the mean formant values of vowels. When tested on the formant data of vowels published by Peterson and Barney, the combined approach leads to well separated clusters by reducing the spread due to talkers. The proposed procedure performs better than two top-ranked normalization procedures based on the accuracy of vowel classification as the objective measure.
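
One possible reading of that description in code: scale each token's formants by the reciprocal of the geometric mean of its first three formants (speaker-intrinsic step), then re-scale by the mean normalized formant values across the vowel set (speaker-extrinsic step). The token values are invented and this is a sketch of the abstract's wording, not the authors' implementation.

```python
# Sketch of intrinsic-cum-extrinsic normalization as described in the abstract:
# (1) divide each token's formants by the geometric mean of its F1-F3,
# (2) re-scale by the mean normalized formant values across vowels.
import numpy as np

tokens = np.array([[270, 2290, 3010],    # F1, F2, F3 in Hz for a few vowel tokens (invented)
                   [660, 1720, 2410],
                   [730, 1090, 2440],
                   [300,  870, 2240]], dtype=float)

gm = np.exp(np.log(tokens).mean(axis=1, keepdims=True))   # per-token geometric mean of F1-F3
intrinsic = tokens / gm                                    # step 1: speaker-intrinsic scaling
extrinsic = intrinsic / intrinsic.mean(axis=0)             # step 2: extrinsic re-scaling
print(extrinsic)
```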

  • Journal: The Journal of the Acoustical Society of America
  • Published: Nov 1, 2016
  • Authors: Ananthapadmanabha T V + 1
  • Open Access

Speech analysis-synthesis system using principal components of vowel spectra

We have studied an effective method using principal components spanning a feature space of isolated vowels. A covariance matrix is calculated from many log-amplitude spectra of isolated vowels uttered by a speaker, and the eigenvalue equation of this covariance matrix is solved; the resulting eigenvectors are called principal vectors. In the analysis system, the log-amplitude spectrum of each frame of a word uttered by the same speaker is transformed into components on the principal vectors. In the synthesis system, a log-amplitude spectrum is reconstructed using the components on the principal vectors with the largest eigenvalues, and the spoken word is synthesized using the LMA filter. We draw the distribution chart of the first and second principal components extracted from Japanese vowel data. This figure was very similar to the F1-F2 distribution of vowels and hence to the vowel classification map in coordinate axes of degree of constriction and tongue hump position. The listening tests showed that the q...
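
The eigen-decomposition step described here is ordinary principal component analysis of log-amplitude spectra. A minimal sketch with random placeholder spectra (not the Japanese vowel data):

```python
# Minimal PCA of log-amplitude spectra, as described in the abstract (placeholder spectra).
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 257))          # rows: frames, cols: log-amplitude spectrum bins

mean_spec = spectra.mean(axis=0)
cov = np.cov(spectra - mean_spec, rowvar=False)      # covariance matrix of the spectra
eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalue equation of the covariance
order = np.argsort(eigvals)[::-1]
principal_vectors = eigvecs[:, order[:2]]            # keep the two largest components

# Analysis: project each frame onto the principal vectors; synthesis: reconstruct from them.
components = (spectra - mean_spec) @ principal_vectors
reconstruction = mean_spec + components @ principal_vectors.T
```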

  • Journal: The Journal of the Acoustical Society of America
  • Published: Oct 1, 2016
  • Authors: Tomio Takara + 3

Effects of Physiological Internal Noise on Model Predictions of Concurrent Vowel Identification for Normal-Hearing Listeners.

Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
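
The "internal noise added at an SNR" manipulation can be illustrated generically: scale Gaussian noise so its power sits a fixed number of decibels below the signal's power before adding it. The sketch uses a placeholder array; the auditory-nerve model and NSIM themselves are not reproduced.

```python
# Generic sketch of adding internal noise at a target SNR (in dB) to a representation,
# e.g., a simulated neurogram. Placeholder data only.
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Add Gaussian noise whose power is snr_db below the signal's mean power."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

rng = np.random.default_rng(0)
neurogram = rng.random((64, 200))        # placeholder time-frequency representation
noisy_generation = add_noise_at_snr(neurogram, snr_db=8, rng=rng)    # noise at neurogram stage
noisy_matching = add_noise_at_snr(neurogram, snr_db=60, rng=rng)     # noise at NSIM stage
```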

  • Journal: PLOS ONE
  • Published: Feb 11, 2016
  • Authors: + 3
  • Open Access

Internal Boundaries and Individual Differences: /aʊ/ Raising in Vermont

Previous research on speech in Vermont has revealed evidence of dialect leveling but with much individual variation. The purpose of the current study of the raising and fronting of /aʊ/ in Vermont is to explore this residual variation with an eye toward offering interpretations based upon group and personal identity and affiliation. Broad demographically based analyses, as well as the examination of interview content and individual vowel data, provide insight into the process of dialect leveling in Vermont. Specifically, in addition to such demographic factors as age and gender, overall life affiliation differences and individual life choices gleaned from interview material are helpful in disambiguating what would otherwise seem to be anomalous variable features. This combination of methods may be particularly useful in situations of economic and cultural change.

  • Journal: American Speech
  • Published: Feb 1, 2016
  • Author: Julie Roberts

Vowels in Wunambal, a Language of the North West Kimberley Region

This paper presents an acoustic-phonetic analysis of vowel data from recordings of Wunambal, a Worrorran language of the Kimberley region in North West Australia. Wunambal has been analysed as a six vowel system with the contrasts /i e a o u ɨ/, with /ɨ/ only found in the Northern variety. Recordings from three senior (60+) male speakers of Northern Wunambal were used for this study. These recordings were originally made for documentation of lexical items. All vowel tokens were drawn from words in short carrier phrases, or words in isolation, and we compare vowels from both accented and unaccented contexts. We demonstrate a remarkably symmetrical vowel space, highlighting where the six vowels lie acoustically in relation to each other for the three speakers overall, and for each speaker individually. While all speakers in our corpus used the /ɨ/ vowel, the allophony observed suggests that it has a somewhat different phonemic status than other vowels. Accented and unaccented vowels are not significantly different for any speaker, and are similarly distributed in acoustic space.

  • Journal: Australian Journal of Linguistics
  • Published: Apr 7, 2015
  • Authors: Deborah Loakes + 3

Competences in contact

This article examines phonological changes brought about by creole-lexifier contact, with secondary focus on the distinction of these changes from those occurring in creole formation. It is argued that lexifier-targeted change involves declarative competence: knowledge of what is and isn’t part of a phonological inventory. It is further argued that such changes do not undo the past, but involve historically innovative modifications to grammatical competence, which subsequently inform productive and perceptual knowledge. A formal account of Guadeloupian vowel data is proposed, which also addresses differential outcomes such as instances of apparent hypercorrection.

  • Journal: Journal of Pidgin and Creole Languages
  • Published: Apr 7, 2015
  • Author: Eric Russell

Formant frequencies and bandwidths of the vocal tract transfer function are affected by the mechanical impedance of the vocal tract wall.

The acoustical properties of the vocal tract, the air-filled cavity between the vocal folds and the mouth opening, are determined by its individual geometry, the physical properties of the air, and the properties of its boundaries. In this article, we address the necessity of complex impedance boundary conditions at the mouth opening and at the border of the acoustical domain inside the human vocal tract. Using finite element models based on MRI data for spoken and sung vowels /a/, /i/ and //, and comparing the transfer characteristics by analysing acoustical data with an inverse filtering method, the global wall impedance showed frequency-dependent behaviour and depends on the produced vowel and therefore on the individual vocal tract geometry. The values of the normalised inertial component (represented by the imaginary part of the impedance) ranged from 250 g/m² at frequencies above about 3 kHz up to about 2.5 × 10⁵ g/m² in the mid-frequency range around 1.5–3 kHz. In contrast, the normalised dissipation (represented by the real part of the impedance) ranged from 65 to 4.5 × 10⁵ Ns/m³. These results indicate that the structures enclosing the vocal tract (e.g. oral and pharyngeal mucosa and muscle tissues), especially their mechanical properties, influence the transfer of acoustical energy and the position and bandwidth of the formant frequencies. This implies that the timbre characteristics of vowel sounds are likely to be tuned by specific control of the relaxation and strain of the structures surrounding the vocal tract.

  • Journal: Biomechanics and Modeling in Mechanobiology
  • Published: Nov 23, 2014
  • Authors: Mario Fleischer + 4
  • Open Access
