ABSTRACT
Face detection has been studied by presenting faces in blank displays, object arrays, and real-world scenes. This study investigated whether these display contexts differ in what they can reveal about detection, by comparing frontal-view faces with those shown in profile (Experiment 1), rotated by 90° (Experiment 2), or turned upside-down (Experiment 3). In blank displays, performance was equivalent across all face conditions, whereas upright frontal faces showed a consistent detection advantage in arrays and scenes. Experiment 4 examined which facial characteristics drive this detection advantage by rotating either the internal or the external facial features by 90° while the other set remained upright. Faces with rotated internal features were detected as efficiently as their intact frontal counterparts, whereas detection was impaired when the external features were rotated. Finally, Experiment 5 applied Voronoi transformations to scenes to confirm that the complexity of the stimulus display modulates the detection advantage for upright faces. These experiments demonstrate that context influences what can be learned about the face detection process. In complex visual arrays and natural scenes, detection proceeds more effectively when the external facial features are preserved in an upright orientation. These findings are consistent with a cognitive detection template that focuses on general face-shape information.