Abstract

Two experiments investigated mental representations of objects' locations in a virtual nested environment. In Experiment 1, participants learned the locations of objects (buildings or related accessories) in an exterior environment and then learned the locations of objects inside one of the centrally located buildings (interior environment). Participants completed judgments of relative direction in which the imagined heading was established by pairs of objects from the interior environment and the target was one of the objects in the exterior environment. Performance was best when the imagined heading and the allocentric target direction were parallel to the learning heading of the exterior environment, but the effect of allocentric target direction was significant only for imagined headings aligned with the reference axes of both environments; in addition, performance was best along the front-back egocentric axis (parallel to the imagined heading). Experiment 2 used the same learning procedure. After learning, the viewpoint was moved from the exterior environment along a smooth path into a side entrance of the building/interior environment. There, participants saw the array of interior objects in the orientation consistent with their movement (correct cue), the array of objects in an orientation inconsistent with their movement (misleading cue), or no array of objects (no cue), and then pointed to objects in the exterior environment. Pointing performance was best in the correct-cue condition. Collectively, the results indicated that memories of nested spaces are segregated by spatial conceptual level, and that spatial relations between levels are specified in terms of the dominant reference directions.
