Abstract

Over the years, technological advances have enabled speech researchers to directly track the skilled, sound-producing movements of the vocal tract, both intraoral and laryngeal articulators normally hidden from view (the tongue, velum, and glottis) and orofacial articulators directly visible on talkers’ faces (the lips and jaw). Despite these advances, however, no single instrument is capable of concurrently recording movements of all the articulators, which has impeded progress in characterizing inter-articulator control and coordination. To explore how inter-articulator coordination subserves linguistic structure, tools that co-register and temporally align different signals from different recording devices are necessary. This tutorial introduces optimal methods for studying the temporal coordination between laryngeal, intraoral, and orofacial articulators by combining various signals from electromagnetic articulography (EMA), electroglottography (EGG), and audio recordings, and displaying the time-aligned signals in the same analysis space. The multimodal data are processed using a set of MATLAB-based functions, which co-register and display positional and velocity trajectories of the lips, tongue, and jaw in tandem with the EGG waveform, F0 trajectory, and acoustic waveform and spectrogram. The coordination of laryngeal and supralaryngeal speech movements can then be measured and analyzed. [Work supported by an Emerging Research Grant from the Hearing Health Foundation.]
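
To give a concrete sense of the kind of co-registration the abstract describes, the following is a minimal MATLAB sketch, not the published toolkit: it places an EMA trajectory and the audio/EGG signals on a shared time axis, estimates a velocity trajectory by differentiation, and displays the streams in linked panels. All sampling rates, variable names, and the assumption of a shared start trigger are hypothetical stand-ins.

```matlab
% Minimal sketch: time-align EMA, EGG, and audio on a common timeline
% and display them together. Signals below are synthetic placeholders;
% replace them with recorded data. Sampling rates are illustrative.

fsAudio = 22050;    % assumed audio / EGG sampling rate (Hz)
fsEMA   = 200;      % assumed EMA sampling rate (Hz)
tEnd    = 2;        % duration in seconds

% Synthetic stand-ins for the recorded signals
audio = randn(1, fsAudio*tEnd);                         % acoustic waveform
egg   = sin(2*pi*120*(0:1/fsAudio:tEnd - 1/fsAudio));   % EGG waveform
emaY  = cumsum(randn(1, fsEMA*tEnd)) / fsEMA;           % e.g., lip vertical position

% Common time axes (alignment assumes recordings share a start trigger)
tAudio = (0:numel(audio)-1) / fsAudio;
tEMA   = (0:numel(emaY)-1)  / fsEMA;

% Velocity trajectory via central differencing
velY = gradient(emaY, 1/fsEMA);

% Time-aligned display in one analysis space
figure;
ax(1) = subplot(4,1,1); plot(tAudio, audio); ylabel('Audio');
ax(2) = subplot(4,1,2); plot(tAudio, egg);   ylabel('EGG');
ax(3) = subplot(4,1,3); plot(tEMA, emaY);    ylabel('Position');
ax(4) = subplot(4,1,4); plot(tEMA, velY);    ylabel('Velocity'); xlabel('Time (s)');
linkaxes(ax, 'x');   % lock all panels to the same time window
```

Linking the panels' x-axes keeps every signal synchronized when zooming or scrolling, which is the practical core of measuring laryngeal and supralaryngeal coordination in the same analysis space.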
