Abstract

Humans have an extraordinary ability to recognize facial expression and identity from a single face simultaneously and effortlessly; however, the underlying neural computation is not well understood. Here, we optimized a multi-task deep neural network to classify facial expression and identity simultaneously. Across various training strategies, the best-performing model consistently showed a 'share-separate' organization. The two separate branches of the best-performing model also exhibited distinct abilities to categorize facial expression and identity, and these abilities increased toward the higher layers of each branch. Comparing representational similarities between the best-performing model and functional magnetic resonance imaging (fMRI) responses in the human visual cortex to the same face stimuli revealed that the face-selective posterior superior temporal sulcus (pSTS) in the dorsal visual cortex correlated significantly with layers in the expression branch of the model, whereas the anterior inferotemporal cortex (aIT) and anterior fusiform face area (aFFA) in the ventral visual cortex correlated significantly with layers in the identity branch. Moreover, the aFFA and aIT best matched the higher layers of the model, while the posterior FFA (pFFA) and the occipital face area (OFA) best matched its middle and early layers, respectively. Overall, our study provides a task-optimized computational model to better understand the neural mechanism underlying face recognition, suggesting that, like the best-performing model, the human visual system exhibits both dissociated and hierarchical neuroanatomical organization when simultaneously coding facial identity and expression.
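To make the two methods in the abstract concrete, below is a minimal sketch of a 'share-separate' multi-task network (a shared trunk feeding two task-specific branches) together with a representational similarity analysis (RSA) comparing a branch layer's representational dissimilarity matrix (RDM) to an fMRI region of interest's RDM. It assumes PyTorch, NumPy, and SciPy; all layer widths, class counts, stimulus counts, and names such as `ShareSeparateNet` are illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

class ShareSeparateNet(nn.Module):
    """Shared early trunk followed by separate expression and identity branches."""
    def __init__(self, n_expressions=7, n_identities=100):
        super().__init__()
        # Shared layers: low-level visual features common to both tasks.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        def branch():
            # Task-specific high-level features (one copy per task).
            return nn.Sequential(
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.expr_branch, self.id_branch = branch(), branch()
        self.expr_head = nn.Linear(128, n_expressions)
        self.id_head = nn.Linear(128, n_identities)

    def forward(self, x):
        shared = self.trunk(x)
        # Two classification outputs from one shared representation.
        return (self.expr_head(self.expr_branch(shared)),
                self.id_head(self.id_branch(shared)))

def rdm(patterns):
    """Condensed RDM: 1 - Pearson r between response patterns
    (one row per face stimulus)."""
    return pdist(patterns, metric="correlation")

def rsa_score(layer_acts, fmri_patterns):
    """Spearman correlation between a model layer's RDM and an ROI's RDM."""
    rho, _ = spearmanr(rdm(layer_acts), rdm(fmri_patterns))
    return rho

# Example: compare the expression branch to one ROI for 40 face stimuli.
net = ShareSeparateNet()
faces = torch.randn(40, 3, 64, 64)        # stand-in for face images
with torch.no_grad():
    expr_feats = net.expr_branch(net.trunk(faces)).numpy()
fmri_roi = np.random.randn(40, 200)       # stand-in for ROI voxel patterns
print(f"RSA (expression branch vs. ROI): {rsa_score(expr_feats, fmri_roi):.3f}")
```

In a study like this one, the RSA score would be computed per model layer and per region of interest (e.g., pSTS, aFFA, pFFA, OFA, aIT), so that each region can be assigned to the branch and depth it matches best.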
