Abstract

We propose a new method of quantifying the utility of the visual information extracted from facial stimuli for emotion recognition. Each stimulus is convolved with a Gaussian estimate of the participant's fixation distribution, revealing more information in the facial regions the participant fixated on. Feeding this convolution to a machine-learning emotion recognition algorithm yields an error measure (between actual and predicted emotions) that reflects the quality of the extracted information. We recorded the eye movements of 21 participants with autism and 23 age-, sex-, and IQ-matched typically developing participants performing three facial analysis tasks: free-viewing, emotion recognition, and brow-mouth width comparison. In the emotion recognition task, the fixations of participants with autism fell on lower regions of the faces and were less focused on the eyes than those of the typically developing group. Moreover, the utility of the information they extracted in this task was lower. Thus, the emotion recognition deficit typical in autism can be traced, at least in part, to the earliest stage of face processing, i.e., the extraction of visual information via eye fixations.
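The fixation-weighting step can be sketched as follows. This is a minimal Python illustration under assumptions the abstract leaves open: the kernel width `sigma`, the peak-normalization of the density map, and the `predict_emotion` classifier are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape, sigma=30.0):
    """Estimate a fixation density map by placing a unit impulse at each
    fixation location and smoothing with an isotropic Gaussian kernel."""
    density = np.zeros(shape, dtype=float)
    for x, y in fixations:
        density[int(round(y)), int(round(x))] += 1.0
    density = gaussian_filter(density, sigma=sigma)
    # Scale to [0, 1]; assumes at least one fixation was recorded.
    return density / density.max()

def reveal_fixated_regions(image, fixations, sigma=30.0):
    """Weight the face image by the fixation density, so fixated regions
    remain visible and unattended regions fade toward black."""
    density = fixation_density_map(fixations, image.shape[:2], sigma)
    return image * density[..., np.newaxis]

if __name__ == "__main__":
    face = np.random.rand(256, 256, 3)              # placeholder face image
    fixations = [(120, 90), (140, 95), (128, 180)]  # (x, y) in pixels
    masked = reveal_fixated_regions(face, fixations)
    # The masked image would then be scored by a trained classifier, e.g.:
    # error = loss(actual_emotion, predict_emotion(masked))
```

Smoothing an impulse map with a Gaussian filter is equivalent to summing one Gaussian per fixation, so the density estimate stays linear in the number of fixations; the resulting error measure then quantifies how useful the fixated regions were for recognizing the displayed emotion.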
