Abstract

Background and Objectives: The purpose of this study was to evaluate the value of an artificial intelligence-based diagnostic tool for vocal cord palsy that does not require a laryngoscope.

Materials and Methods: A dataset was constructed from recordings of patients with unilateral vocal cord paralysis (n = 54) and normal individuals (n = 163). The dataset included prolonged phonations of the vowels /ah/, /u/, and /i/, along with vocal cord data from the paralyzed patients. Acoustic parameters such as Mel-frequency cepstral coefficients, jitter, shimmer, harmonics-to-noise ratio, and fundamental frequency statistics were analyzed. Classification of vocal cord paralysis encompassed paralysis status, paralysis degree, and paralysis location. The deep learning models were evaluated with the leave-one-out method, and the feature set with the highest performance was selected.

Results: Vocal cord paralysis classifier: the classifier distinguished normal voice from vocal cord paralysis with an accuracy and F1 score of 1.0. Paralysis location classifier: the classifier differentiated median from paramedian vocal cord paralysis with an accuracy and micro F1 score of 1.0. Breathiness degree classifier: the classifier achieved an accuracy of 0.795 and a mean absolute error of 0.2857 in distinguishing degrees of breathiness.

Conclusion: Although the small sample size raises concerns about potential overfitting, this preliminary study highlights distinctive acoustic features of unilateral vocal fold paralysis compared with normal voices. These findings suggest that the presence, degree, and location of paralysis can feasibly be determined from acoustic parameters. Further research is warranted to validate and expand upon these results.
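
The acoustic measures named in the abstract (MFCCs, jitter, shimmer, harmonics-to-noise ratio, and fundamental frequency statistics) can all be computed from a single sustained-vowel recording with open-source tools. The sketch below is illustrative only: it assumes the librosa and praat-parselmouth Python packages and a hypothetical file path, it is not the authors' implementation, and the Praat analysis parameters shown are common defaults rather than the settings used in the study.

```python
# Minimal sketch of per-recording acoustic feature extraction
# (assumes librosa and praat-parselmouth; not the authors' code).
import numpy as np
import librosa
import parselmouth
from parselmouth.praat import call


def extract_features(wav_path: str, f0_min: float = 75.0, f0_max: float = 500.0) -> np.ndarray:
    """Return a fixed-length acoustic feature vector for one sustained vowel."""
    # MFCC statistics via librosa: mean over frames of 13 coefficients.
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mfcc_mean = mfcc.mean(axis=1)

    # Perturbation and noise measures via Praat (through parselmouth),
    # using commonly cited default analysis parameters.
    snd = parselmouth.Sound(wav_path)
    point_process = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
    jitter = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, point_process], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)

    # Fundamental-frequency statistics over voiced frames only.
    pitch = snd.to_pitch(pitch_floor=f0_min, pitch_ceiling=f0_max)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]
    f0_stats = [f0.mean(), f0.std(), f0.min(), f0.max()]

    return np.concatenate([mfcc_mean, [jitter, shimmer, hnr], f0_stats])
```

In a leave-one-out evaluation like the one described above, a vector of this kind would be computed for every recording; the classifier would then be trained on all recordings except one and tested on the held-out recording, cycling through the entire dataset.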
