Abstract

Recent studies on face processing have shown that we are sensitive to changes in both configural and featural facial information. However, how these two types of information are integrated remains poorly understood. This study therefore explored the integration of facial information across the configural and featural dimensions within specific facial regions (i.e., the eyes and the mouth).

We tested three hypotheses: (1) if participants can integrate facial information across the configural and featural dimensions, they should be more sensitive to changes in both dimensions than to changes in a single dimension, a "cross-dimension covariation enhancement effect"; (2) this effect should be face region-selective, being stronger in the eye region than in the mouth region; and (3) face inversion should impair the effect. To test these predictions, we conducted two 3 (facial information type: configural change, featural change, both changed) × 2 (face orientation: upright, inverted) experiments, one for eye-region changes and one for mouth-region changes. Participants' sensitivity to information changes was measured with a two-face discrimination task.

Results revealed that (1) participants were more sensitive to dual-dimension changes in the eye region than to changes in either configural or featural information alone; (2) this effect was both orientation-specific (no effect was found in the eye region when faces were inverted) and region-specific (no effect was found in the mouth region regardless of face orientation), indicating that it cannot be explained simply by the larger amount of information changed in the dual condition; and (3) when only a single dimension was altered, face inversion reduced the detection of changes in the mouth region but not in the eye region.

In sum, cross-dimension (configural and featural) integration of facial information occurred in the eye region of upright faces, but not in the mouth region or in inverted faces. This orientation- and region-specificity suggests that the integration operates at the level of facial regions and may involve holistic face processing. Whereas the traditional holistic processing account emphasizes integrating information across the whole face, the present findings suggest that the face region may be a key component within the framework of holistic face processing theory. Finally, by revisiting the "perceptual field" hypothesis, the "expertise area" hypothesis, and the "region-selective holistic processing" hypothesis, we propose an eye region-centered, hierarchical, multi-dimensional information integration hypothesis.

Highlights

  • This study explored the integration of facial information across the configural and featural dimensions within specific facial regions (i.e., the eyes and the mouth).

  • In addition, at the "component" scale, detection of single-dimension information changes that did not involve "integrative processing" also showed an eye advantage: sensitivity was higher for the eye region and lower for the mouth region. These results are consistent with previous findings (Sekunova & Barton, 2008; Tanaka, Kaiser, et al., 2014; Tanaka, Quinn, et al., 2014), all of which show a strong advantage of the eye region in single-dimension information discrimination. Importantly, previous experiments did not deliberately control participants' perceptual bias or attentional strategy, which may have confounded their results. When participants do not know in advance whether to attend to the eye region or the mouth region, they may allocate more time and processing resources to the eyes and too few to the mouth; especially when faces are inverted, task difficulty rises and, to conserve cognitive resources, participants are likely to adopt more flexible attentional strategies, for example allocating even fewer resources to the mouth region, leading experiments to underestimate sensitivity to information changes there. The present study minimized such attentional-strategy interference by using a fixed observation region, explicit instructions, and a block design, yet the eye-region advantage was still observed, yielding a more reliable result than previous studies. The reason single-dimension change detection is more sensitive in the eye region than in the mouth region may lie in the stimulus properties of the eye region itself…



Introduction

Face holistic processing also shows region selectivity. Holistic face processing refers to the face perception system organizing multi-dimensional information across the whole face into a single whole (for a review, see Tanaka & Gordon, 2011). Experiments using the composite face task have probed the mutual influence between information changes in the top half of the face (eye region) and the bottom half (mouth region), finding that recognition of the top half (or bottom half) is automatically interfered with by information changes in the bottom half (or top half). Moreover, the strength and stability of these influences differ markedly between the two halves: the influence of the top half on the bottom half is more stable, whereas the strength of the bottom half's influence on the top half increases with the level of face-processing expertise, eventually matching or even exceeding the top half's influence on the bottom half (Wang et al., 2019).

New ideas have also emerged at the theoretical level. First, Rossion (2008, 2009) proposed the "perceptual field" hypothesis in the field of holistic face processing, arguing that whole-face information integration is centered on the eyes rather than being completely uniform: integration is stronger in the top half of the face (eye region) and weaker in the bottom half (mouth region). On this basis, the perceptual field hypothesis offers a new explanation of the classic face inversion effect: inversion disrupts cross-region integration between the top half (eye region) and the bottom half (mouth region), and may also disrupt integration within the bottom half (between the nose and mouth, within the nose itself, within the mouth itself, etc.), but it does not disrupt integration within the top half (between the two eyes, between the eyebrows and eyes, within the eyes themselves, etc.). Second, after reviewing more than thirty years of experiments on whole-face information integration, Tanaka and Gordon (2011) proposed that "face region" may be a key moderator of face information integration; combining this with the perceptual field hypothesis, they suggested that face inversion disrupts not only information integration in the bottom half of the face but also information discrimination there, thereby explaining the results of the face dimensions test described above (Tanaka, Quinn, et al., 2014).

Taken together, these experimental findings and theoretical analyses suggest that some form of information integration may occur at an intermediate scale (half-face or region) between single-dimension change detection and whole-face multi-dimensional integration, and that detection of single-dimension changes may be linked to perceptual integration of multi-dimensional whole-face information. To date, however, no experiment has provided specific evidence for this.

