Abstract

Background
Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD).

Methods
We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice to perform two experimental tasks designed to elicit FEs with emotional valence. FEs were assessed by judges' ratings and by random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs.

Results
Using multivariate models that included other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. When we examined RF classifier performance, we found that classification was highly accurate for all emotions except sadness, and that RF classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when RF classifiers were tested on children with ASD, anger was often confused with happiness.

Limitations
The sample of children with ASD was smaller than the sample of TD children. We used several control calculations to compensate for this limitation.

Conclusion
Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs in children with ASD carries more ambiguity.
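To make the classification pipeline concrete, the following is a minimal sketch of the landmarks-to-features-to-RF flow described above, not the authors' actual implementation. It assumes a hypothetical extract_landmarks() helper standing in for whatever face tracker produced the 49 landmarks, uses only pairwise-distance geometric features (the study also used appearance features), and substitutes random data so the sketch runs end to end.

```python
# Sketch of the described pipeline: 49 facial landmarks -> geometric
# features -> random forest classifier -> confusion matrix.
# ASSUMPTIONS: extract_landmarks() is hypothetical; features and data
# here are illustrative stand-ins, not the study's actual setup.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

N_LANDMARKS = 49  # the study tracked 49 facial landmarks per frame


def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn a (49, 2) array of (x, y) landmark positions into a flat
    vector of all pairwise Euclidean distances between landmarks."""
    pairs = itertools.combinations(range(len(landmarks)), 2)
    return np.array(
        [np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs]
    )


# X: one feature vector per expression clip; y: emotion label per clip.
# Random data is used purely so the sketch is self-contained and runnable.
rng = np.random.default_rng(0)
n_clips = 200
n_features = N_LANDMARKS * (N_LANDMARKS - 1) // 2  # 1176 pairwise distances
X = rng.normal(size=(n_clips, n_features))
y = rng.choice(["happiness", "anger", "sadness", "neutral"], size=n_clips)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# A confusion matrix like those used to spot, e.g., anger being
# confused with happiness in the ASD group.
labels = ["happiness", "anger", "sadness", "neutral"]
print(confusion_matrix(y_test, clf.predict(X_test), labels=labels))
```

With real data, rows of the printed matrix would show how often each produced emotion was classified as each label, which is how systematic confusions such as anger/happiness become visible.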

Highlights

  • The understanding of our facial expressions (FEs) by others is crucial for social interaction

  • The computer vision methods we used to explore FE dynamics highlight that the production of FEs in children with autism spectrum disorder (ASD) carries more ambiguity

  • Adult-like FE physiognomies appear progressively, and FE learning continues even in late childhood: production improves continuously between 5 and 13 years of age and is influenced by emotional valence, gender, task type, and ethnic and cultural factors [5,6,7,8,9,10,11]


Introduction

The understanding of our facial expressions (FEs) by others is crucial for social interaction. Adult-like FE physiognomies appear progressively, and the learning of FEs continues even in late childhood: the ability to produce FEs improves continuously between 5 and 13 years of age, and 13-year-old adolescents do not yet produce all FEs perfectly [5, 6]. Their production is influenced by several factors: (1) emotional valence (e.g. positive emotions are easier to produce than negative emotions) [6,7,8]; (2) gender, as girls tend to produce positive emotions more than boys, and boys tend to produce anger more than girls [9]; (3) the type of task, as this factor modulates the quality of FEs in children (e.g. children are better with request tasks than with imitation tasks) [6]; and (4) ethnic and cultural factors (e.g. cross-cultural studies indicate that the intensity of spontaneous expression is not universal and may vary across cultures) [10, 11]. Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD).
