Abstract

We used a happy/sad classification task and a psychophysical model to study the tuning properties of facial expression processors across viewing conditions. Using morphed faces, we measured the extent to which classification of facial expressions depends on the intensity of a particular expression in either the upper or lower face. In the fovea, the upper and lower parts of the test image were either aligned or laterally shifted by 44′ of visual angle. In the periphery, the aligned test image was placed 6° of visual angle to the left of fixation. Observers were asked to classify a test image of a facial expression as happy or sad. We found that alignment of the upper and lower halves of the face had no effect on happy/sad classification in the fovea, suggesting that the classification of facial expressions is an analytic process. The model likewise showed no interaction between the two halves of the face in foveal facial expression classification. In addition, observers' poor performance in recognizing happiness in the periphery points to greater computational complexity, suggesting a model in which the happy-face processor relies on both the facial features and the interaction between them to recognize happiness in the periphery.
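The distinction between an analytic (additive) process and one involving feature interaction can be illustrated with a simple logistic-response sketch. This is a minimal illustration, not the paper's fitted model: the weights, intensity units, and the form of the interaction term are all assumptions chosen for clarity.

```python
import math

def p_happy(upper, lower, w_u=1.0, w_l=1.0, w_int=0.0, bias=0.0):
    """Probability of a 'happy' response given expression intensities
    on the upper and lower face halves (arbitrary units).

    w_int = 0 gives a purely additive (analytic) model, as suggested
    for foveal viewing; w_int > 0 adds the kind of feature interaction
    the abstract proposes for peripheral happiness recognition.
    All weights here are illustrative, not values from the study.
    """
    z = bias + w_u * upper + w_l * lower + w_int * upper * lower
    return 1.0 / (1.0 + math.exp(-z))

# Analytic model: the two halves contribute independently.
foveal = p_happy(0.8, 0.8)
# Interactive model: same inputs plus an interaction term,
# which raises the response when both halves signal happiness.
peripheral = p_happy(0.8, 0.8, w_int=1.5)
```

In this toy formulation, alignment manipulations would leave the additive model's prediction unchanged, while any dependence on the joint configuration of the halves would show up only through the interaction term.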
