This study tested the effect of providing a vista view of a large-scale environment on spatial learning. During normal navigation, we can only perceive limited regions at a time; to learn the spatial layout, we must integrate information across multiple views. If an unoccluded view of the full environment were displayed (a vista view), would this improve spatial learning? We tested this question experimentally in two pre-registered VR experiments. We created vista views of urban environments by compressing the height of most non-target buildings, allowing participants to see the whole space and the configuration of target buildings from an egocentric perspective. In Experiment 1 (N = 32), the vista view was presented during travel to each target on learning trials. In Experiment 2 (N = 36), the vista view was presented for 10 s in a preview period before each learning trial. After the learning phase, participants were tested on judgments of relative direction (JRD) to measure survey knowledge and on wayfinding to measure route knowledge. We hypothesized that the vista views would improve survey knowledge by allowing participants to directly observe the locations of targets relative to each other and to global landmarks, with little or no benefit expected for route knowledge. Surprisingly, even though the vista view allowed participants to see the configuration of targets at once, there was no improvement in JRD or wayfinding performance compared with the normal condition, in which visibility was limited to local regions. Our results suggest that a street-level unoccluded view may provide limited benefit for learning a large-scale environment through navigation.