Abstract

Early models of face processing proposed that facial cues indicating a person's sex, race, and age were processed separately from variant cues such as emotional expression. Additionally, early theories of emotion perception suggested that the processing of emotional expressions was unaffected by the situations in which the expressions were encountered. Subsequent research has demonstrated that this is not the case: the processing of emotional expressions is influenced by a range of contextual factors as well as by other social category cues present on the face. However, the manner in which the multiple sources of information on a face are integrated, and how this integration is influenced by situational factors, is not yet fully understood. The overall aim of this thesis was therefore to extend our understanding of how higher-order cognitive states influence the interaction of facial cues and social categories, specifically in the processing of emotional expressions.

This thesis describes a series of investigations, using several methods (affective priming, visual search, and categorization), of how explicitly and implicitly activated higher-order cognitive states influence the interaction of multiple facial cues and categories. Higher-order cognitive states were manipulated explicitly by instructing participants to focus on different kinds of information available within the face. They were elicited implicitly by altering the other faces seen at the same time as the target face, on other trials within the same task, or in recently completed tasks.

Using the affective priming method, Chapter 2 demonstrated that implicit evaluations were more strongly influenced by the emotional expression displayed on the face primes than by the social category (race, sex, or age) of the face. An influence of the social category was observed only when participants were instructed to focus on this dimension, demonstrating that the nature of the task can influence the way in which cues such as race, sex, age, and emotion interact. Using the visual search paradigm, Chapter 3 demonstrated that the nature of the background faces in a visual search task alters which expressions are detected more quickly: happy faces were detected faster in backgrounds made up of a range of different emotional faces, whereas angry faces tended to be detected faster in homogeneous backgrounds made up of faces expressing the same emotion. Chapter 4.1 showed that the way facial cues of race influence the categorization of happy and angry expressions depended on the presentation duration, the stimulus type, and, importantly, the number of different faces presented within a task. Chapter 4.2 investigated whether this finding could be accounted for by the perceptual load hypothesis and found it to be an unlikely explanation for the different patterns of results observed at small and large set sizes. Finally, Chapter 5 demonstrated that the way facial sex cues influence emotion categorization can be affected by other recently completed tasks: the typical finding of faster categorization of happy than angry expressions on female but not male faces was significantly altered when male and female faces were presented in separate tasks rather than together within the same task.
Together, these studies demonstrate that the way multiple facial cues indicating social category membership and emotion interact is flexible and sensitive to changes in task demands, both explicitly through instruction and implicitly through the composition of the task. On a practical level, these studies highlight the need to consider how the composition of a task might be responsible for producing a particular pattern of results. On a theoretical level, they emphasize the need to better understand the full range of situational influences on the interaction of multiple facial cues if we are to predict how a particular expression on a particular face will be processed within a given context.
