Abstract
Three experiments investigated the perception of facial displays of emotion. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions, as they are usually defined verbally, do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous, anisotropic space structured by valence axes. Experiment 3 (identification task) introduced a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discriminative features is possible and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.