Abstract
Low-dimensional signals derived from speech allow for more efficient computational analysis than do raw speech signals and have been argued to be involved in neural synchronization to speech (e.g., Assaneo & Poeppel, 2018; Giraud et al., 2007). While multiple such signals have been derived from the acoustic domain, little research focuses on low-dimensional signals derived from articulation, which provide a global characterization of movement of the vocal tract (cf. Orlando & Palo, 2023; Poeppel & Assaneo, 2020). We discuss one such signal, the articulatory modulation function (Goldstein, 2019; Campbell et al., 2023). In-progress research has found reduced stability of articulatory modulation in speakers with ALS; thus, the signal may serve as an index of inter-articulator coordination. In this tutorial, we describe uses for, and methods of extracting, low-dimensional signals from X-Ray Microbeam (XRMB) data (Westbury et al., 1994) and real-time MRI data. While a standard method using XRMB data has been established, no such standard exists for MRI data. We discuss the advantages and disadvantages of two MRI approaches: (1) pre-segmented contours and (2) principal component analysis. [Work supported by NIH and NSF.]
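As a rough illustration of the kind of dimensionality reduction discussed above (not the authors' exact pipeline), the sketch below computes a 1-D "modulation-like" signal from toy multichannel articulator trajectories as the frame-to-frame velocity norm across all coordinates, and separately projects the frames onto their leading principal components via SVD, in the spirit of approach (2). The array shapes, sampling rate, and the specific velocity-norm formulation are assumptions for illustration only.

```python
import numpy as np

# Toy stand-in for XRMB-style pellet trajectories: T time samples by
# 2N coordinates (N pellets, x/y each). Real data would come from the
# Westbury et al. (1994) corpus or a real-time MRI pipeline.
T, n_coords = 200, 8                       # hypothetical sizes
t = np.linspace(0, 2 * np.pi, T)
X = np.stack([np.sin((k + 1) * t) for k in range(n_coords)], axis=1)

def modulation_signal(X, fs=1.0):
    """One plausible low-dimensional summary (an assumption, not the
    published definition): the Euclidean norm of the frame-to-frame
    velocity of all articulator coordinates, giving a single time
    series that tracks the global rate of vocal-tract change."""
    v = np.diff(X, axis=0) * fs            # velocities, shape (T-1, 2N)
    return np.linalg.norm(v, axis=1)       # global articulatory speed

m = modulation_signal(X)

# PCA-style reduction (cf. MRI approach 2): center the frames and keep
# the scores on the first few principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                     # first 3 PC scores per frame
```

Both outputs are low-dimensional time series suitable for, e.g., stability or synchronization analyses, though the choice of norm, smoothing, and number of components would need to follow the tutorial's actual methods.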