Abstract

Human behavior is the confluence of output from voluntary and involuntary motor systems. The neural activities that mediate behavior, from individual cells to distributed networks, are in a state of constant flux. Artificial intelligence (AI) research over the past decade shows that behavior, in the form of facial muscle activity, can reveal information about fleeting voluntary and involuntary motor system activity related to emotion, pain, and deception. However, these AI algorithms often cannot explain their decisions, and learning meaningful representations requires large datasets labeled by subject-matter experts. Motivated by the success of using facial muscle movements to classify brain states and by the importance of learning from small amounts of data, we propose an explainable self-supervised representation-learning paradigm that learns meaningful temporal facial muscle movement patterns from limited samples. We validate our methodology by carrying out a comprehensive empirical study to predict future speech behavior in a real-world dataset of adults who stutter (AWS). Our explainability study found that facial muscle movements around the eyes (p
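The abstract does not specify the authors' self-supervised objective, so the following is only a minimal illustrative sketch of one common approach to self-supervised representation learning on temporal signals: contrastive learning over two augmented views of the same clip (an NT-Xent-style loss). The encoder architecture, the `augment` function, and all hyperparameters here are hypothetical stand-ins, not the paper's method; the random tensor merely stands in for windows of multichannel facial-muscle movement data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoder for multichannel facial-movement time series:
# input (batch, channels, time) -> L2-normalized embedding (batch, embed_dim).
class TemporalEncoder(nn.Module):
    def __init__(self, in_channels=8, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)                 # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)    # unit-norm embeddings

def augment(x):
    # Simple stochastic augmentations: additive jitter plus channel-wise scaling.
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1)
    return (x + noise) * scale

def nt_xent(z1, z2, tau=0.2):
    # NT-Xent contrastive loss: the two views of the same clip are positives,
    # all other clips in the batch serve as negatives.
    B = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                  # (2B, d)
    sim = z @ z.t() / tau                           # cosine similarities (z is normalized)
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

# Toy training step on random data standing in for facial-movement windows.
x = torch.randn(16, 8, 128)          # 16 clips, 8 channels, 128 time steps
enc = TemporalEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
loss = nt_xent(enc(augment(x)), enc(augment(x)))
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.3f}")
```

After pretraining with such an objective, the frozen encoder's embeddings could be fed to a small classifier trained on the limited labeled samples, which is the usual way a self-supervised paradigm addresses label scarcity of the kind the abstract describes.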
