Abstract

Some years ago an improved design (the “complete design”) was proposed to assess the composite face effect in terms of a congruency effect, defined as the performance difference between congruent and incongruent target-to-no-target relationships (Cheung et al., 2008). In a recent paper, Rossion (2013) questioned whether the congruency effect is a valid hallmark of perceptual integration, because it may be confounded with face-unspecific interference effects. Here we argue that the complete design is well balanced and allows one to separate face-specific from face-unspecific effects. We used the complete design in a same/different composite-stimulus matching task with faces and non-face objects (watches). Subjects performed the task with and without trial-by-trial feedback, and with low and high certainty about the target half. Results showed large congruency effects for faces, particularly when subjects were informed late in the trial about which face halves had to be matched. Analysis of response bias revealed that subjects preferred the “different” response in incongruent trials, which is expected when upper and lower face halves are integrated perceptually at the encoding stage. This pattern of results was observed in the absence of feedback, whereas providing feedback generally attenuated the congruency effect and led subjects to avoid response bias. For watches, no or only marginal congruency effects and a moderate global “same” bias were observed. We conclude that the congruency effect, when complemented by an evaluation of response bias, is a valid hallmark of feature integration that allows one to separate faces from non-face objects.

Highlights

  • A common observation in face perception or recognition experiments is that observers have difficulty judging face parts independently

  • The N170, a face-selective ERP component (Bentin et al., 1996; Itier and Taylor, 2004; Rousselet et al., 2004, 2008; Jacques and Rossion, 2009), was found to be jointly modulated by cars and faces among car experts, which indicates that integrated encoding of object features may have a common sensory basis in objects of expertise

  • It has been shown that the complete design can be used to derive testable predictions for the mechanisms of facial feature integration, which can be contrasted against results for non-facial objects


Introduction

A common observation in face perception and recognition experiments is that observers have difficulty judging face parts independently. The stronger integration of parts for faces compared to non-face objects was substantiated in subsequent studies using classic hallmarks of feature integration (Gauthier et al., 1998; Yovel and Kanwisher, 2004; Kanwisher and Yovel, 2006; Robbins and McKone, 2007; Macchi Cassia et al., 2009; Taubert, 2009; Meinhardt-Injac, 2013). The N170, a face-selective ERP component (Bentin et al., 1996; Itier and Taylor, 2004; Rousselet et al., 2004, 2008; Jacques and Rossion, 2009), was found to be jointly modulated by cars and faces among car experts, which indicates that integrated encoding of object features may have a common sensory basis in objects of expertise. Despite the dispute about the role of expertise, there is consensus that faces and non-face objects differ in their degree of part integration when high degrees of familiarity, expertise, or training are not involved (Gauthier et al., 2003; McKone et al., 2006; Rossion, 2013).

