Abstract

Creaky voice is a voice quality in which low subglottal air pressure, a condensed vocal fold structure, and a high closed quotient of vibration combine to create the auditory percept of a series of pulses at a low pitch. While this voice quality is often nonpathological, it can also co-occur with vocal pathologies. Identification of creak in the speech signal is most often done manually. Automatic creak detection algorithms have been developed to streamline identification and produce replicable workflows. These algorithms have steadily increased in reliability, with COVAREP (Degottex et al., 2014) as the most recent state of the art. While preliminary studies using artificial neural networks on clinical data have demonstrated promising findings, such networks typically improve when tested on more diverse data. The current study applies COVAREP to a novel dataset, novel both in its speakers and its speech types. Deidentified patient diagnoses were matched to audio recordings collected from January 2021 through September 2023. Relevant portions of the audio recordings were extracted with a Praat script, and COVAREP was run on the extracted audio files in MATLAB. Ongoing analyses correlating the percentage of creak detected with vocal pathology diagnoses will be discussed. Finally, the results will be compared with those of previous work.
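The per-file processing step described above (running COVAREP over Praat-extracted clips and tallying the proportion of creak) could be sketched as follows. This is a minimal illustration, not the authors' pipeline: the folder name extracted_audio, the function name detect_creaky_voice, and its output format are assumptions about the public COVAREP toolbox and may differ across versions.

```matlab
% Minimal sketch: batch a folder of Praat-extracted clips through a
% COVAREP creaky voice detector and compute the percentage of creaky
% frames per file. Folder, function name, and output format are assumed.
addpath(genpath('covarep'));  % assumed local checkout of the COVAREP toolbox

clips = dir(fullfile('extracted_audio', '*.wav'));  % hypothetical folder of extracted segments
creakPct = zeros(numel(clips), 1);

for k = 1:numel(clips)
    [x, fs] = audioread(fullfile(clips(k).folder, clips(k).name));
    creakFrames = detect_creaky_voice(x, fs);        % assumed per-frame creak decisions
    creakPct(k) = 100 * mean(creakFrames(:) > 0.5);  % percent of frames flagged as creaky
end

% Per-file summary that could then be joined to deidentified diagnoses
table({clips.name}', creakPct, 'VariableNames', {'File', 'PercentCreak'})
```

The per-file percentages produced this way would then be the quantity correlated with vocal pathology diagnoses in the ongoing analyses.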
