Spatial memory studies often employ static images depicting a scene, an array of objects, or environmental features from one perspective and then, following a perspective shift, probe memory either for the scene or for objects within it. The current study investigated a previously reported systematic bias in spatial memory whereby, following a perspective shift from encoding to recall, participants indicate the location of an object farther in the direction of the shift. In Experiment 1, we aimed to replicate this bias by asking participants to encode the location of an object in a virtual room and then indicate it from memory after a perspective shift induced by camera translation and rotation. In Experiment 2, we decoupled the influences of camera translation and rotation and examined whether adding objects to the virtual room would reduce the bias. Overall, our results indicate that camera translations produce greater systematic bias than camera rotations. We propose that accurately representing camera translations requires more demanding mental computations than representing camera rotations, leading to greater uncertainty about an object's remembered location. This uncertainty causes people to rely on an egocentric anchor, thereby giving rise to the systematic bias in the direction of camera translation.