Abstract

Video face-tracking software, such as OpenFace 2.0, can be used to infer facial muscle activation [Baltrušaitis et al. IEEE 13, 59–66 (2018)]. However, the accuracy of inferences based on the facial action units (FAUs) computed by OpenFace 2.0, relative to the corresponding muscle activity during speech, is unclear. A previous study investigated muscle activation when smiling and speech occur simultaneously, focusing on the zygomaticus major (ZM) muscle [Liu et al. ISSP, 130–133 (2021)], but it presented data for only a single speaker and did not compare FAU and EMG results. The present study compares OpenFace 2.0 action units with surface electromyography (EMG) data during speech to assess the validity of such inferences about facial muscle activation. We compare ZM activity with lip corner puller FAU intensity using the dataset collected for the [Liu et al. 2021] study. Data include four speakers producing read speech in smile conditions; 2 s will be extracted from the EMG and FAU data before and after each utterance. Results will be reported on the relative accuracy of the FAU and EMG data, and implications for speech communication research will be discussed. [Work supported by NSERC.]
