Abstract

We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or by other factors such as affect. We measured search slopes for luminance- and contrast-equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have facial feature displacements of equivalent magnitude to their corresponding expressions, but different affective content. There was a strong correlation between search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating that feature displacement affected search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise that could not be explained in terms of feature differences, suggesting that preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of the expressions and anti-expressions of each of these emotions differed, suggesting that signals of affect may well have influenced attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at several levels, including affect and the magnitude of feature displacement. We note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing. These effects are most likely nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.
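For readers unfamiliar with the measure, a search slope is the additional reaction time incurred per extra item in the search display, estimated by regressing reaction time on set size. The sketch below illustrates how such slopes might be computed and then correlated with feature-displacement magnitudes across conditions; it is a minimal illustration, not the authors' actual analysis pipeline, and all set sizes, reaction times, slopes, and displacement values are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical data: mean reaction times (ms) at each display set size
# for a single expression condition.
set_sizes = np.array([4, 8, 12])
mean_rts = np.array([620.0, 710.0, 795.0])

# Search slope: extra search time (ms) per added item, taken as the
# slope of the RT-versus-set-size regression line.
slope, intercept, r, p, se = stats.linregress(set_sizes, mean_rts)
print(f"search slope = {slope:.1f} ms/item")

# Relating slopes to feature displacement: correlate per-condition
# search slopes with per-condition displacement magnitudes
# (both arrays hypothetical, one value per condition).
search_slopes = np.array([21.9, 35.4, 18.2, 40.1, 12.7, 30.5])
displacements = np.array([3.1, 1.8, 3.6, 1.5, 4.2, 2.0])
r_val, p_val = stats.pearsonr(search_slopes, displacements)
print(f"Pearson r = {r_val:.2f}, p = {p_val:.3f}")
```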
