<p id=C2>Recent studies on face processing have demonstrated human sensitivity to changes in facial configural and featural information. However, how these two types of facial information are integrated remains poorly understood. To address this gap, this study explored the integration of facial information across the configural and featural dimensions within specific facial regions (i.e., the eyes and the mouth). <break/>Our theoretical hypotheses were as follows: (1) if participants can integrate facial information across the configural and featural dimensions, they should be more sensitive to changes in dual-dimension information than to changes in a single dimension, that is, a <italic>cross-dimension covariation enhancement effect</italic>; (2) the cross-dimension covariation enhancement effect should be face-region-selective: it is expected to be stronger in the eye region than in the mouth region; (3) face inversion should impair the cross-dimension covariation enhancement effect. To test these predictions, we designed two 3 (facial information type: configural change, featural change, dual change) × 2 (face orientation: upright, inverted) experiments, one for information changes in the eye region and one for the mouth region. Participants’ sensitivity to information changes was measured in a two-face discrimination task. <break/>Results revealed that (1) participants were more sensitive to the “dual” change in the eye region than to a change in either configural or featural information alone; (2) this effect was both orientation-specific (i.e., no effect was found in the eye region when faces were inverted) and region-specific (i.e., no effect was found in the mouth region regardless of face orientation), suggesting that it cannot be explained simply by the extra facial information changes in the “dual” condition; (3) when a single type of facial information was altered, face inversion reduced the detection of changes in the mouth region, but not in the eye region. 
<break/>In sum, our findings showed that cross-dimension (i.e., configural and featural) integration of facial information occurred in the eye region of upright faces, but not in the mouth region or in inverted faces. This face-orientation specificity and facial-region specificity suggest that the integration occurs at the facial-region level, possibly involving holistic face processing. The traditional holistic face processing hypothesis emphasizes integrating facial information across the whole face; the current findings suggest that the facial region may act as a key component in the framework of holistic face processing theory. Finally, by revisiting the “perceptual field” hypothesis, the “expertise area” hypothesis, and the “region-selective holistic processing” hypothesis, we discussed an eye-region-centered, hierarchical, multi-dimensional information-integration hypothesis.