Understanding another person’s visual perceptions is known as visuospatial perspective taking, and evidence to date demonstrates that it is delineated across two levels, depending on how different that perspective is from one’s own. Some strategies for visuospatial perspective taking have also been found to involve embodied cognition. However, the generalisability of these findings is currently limited by experimental setups and the use of computer monitors as the interface for experimental tasks. Augmented reality interfaces could extend the generalisability of these findings by situating virtual stimuli in the real environment, thus providing a higher degree of ecological validity alongside experimental standardisation. This study aimed to observe visuospatial perspective taking in augmented reality. This was achieved in a participant experiment (N = 24) using the Left-Right behavioural speeded decision task, which requires participants to discriminate between target objects relative to the perspective of an avatar. Angular disparity and posture congruence between the avatar and the participant were manipulated between trials to delineate between the two levels of visuospatial perspective taking and to examine its potentially embodied nature. Generalised linear mixed modelling indicated that angular disparity increased task difficulty; unexpectedly, however, findings on posture congruence were less clear. Together, these results suggest that visuospatial perspective taking in this study can be delineated across two levels. Further implications for embodied cognition and empathy research are discussed.