Abstract
Spatial memory studies often employ static images depicting a scene, an array of objects, or environmental features from one perspective and then, following a perspective shift, prompt memory either of the scene or of objects within it. The current study investigated a previously reported systematic bias in spatial memory whereby, following a perspective shift from encoding to recall, participants indicate the location of an object as displaced in the direction of the shift. In Experiment 1, we aimed to replicate this bias by asking participants to encode the location of an object in a virtual room and then indicate it from memory following a perspective shift induced by camera translation and rotation. In Experiment 2, we decoupled the influences of camera translations and rotations and examined whether adding objects to the virtual room would reduce the bias. Overall, our results indicate that camera translations produce greater systematic bias than camera rotations. We propose that accurately representing camera translations requires more demanding mental computations than representing camera rotations, leading to greater uncertainty about the object's location in memory. This uncertainty causes people to rely on an egocentric anchor, giving rise to the systematic bias in the direction of camera translation.