VR Distance Judgments are Affected by the Amount of Pre-Experiment Blind Walking
Many studies have found that people can accurately judge distances in the real world yet underestimate distances in virtual reality (VR). This discrepancy negatively impacts some VR applications. Direct blind walking is a popular method of measuring distance judgments in which participants view a target and then walk to it while blindfolded. To ensure that participants are comfortable with blindfolded walking, researchers often require participants to practice blind walking beforehand. We call this practice "pre-experiment blind walking" (PEBW). Few studies report details about their PEBW procedure, and little research has been conducted on how PEBW might affect subsequent distance judgments. This between-participants study varied the amount of PEBW and had participants perform distance judgments in VR. The results show that a longer PEBW causes less distance underestimation. This work demonstrates the importance of clearly reporting PEBW procedures and suggests that a consistent procedure may be necessary to reliably compare direct blind walking research studies.
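Blind-walking responses are typically summarized as the ratio of walked distance to actual target distance, with values below 1.0 indicating underestimation. The following is a minimal, hypothetical sketch of that summary step, not the analysis code from this study; the distances and condition labels are made up for illustration.

```python
# Hypothetical blind-walking summary: walked distance / actual target distance.
# Ratios below 1.0 indicate distance underestimation.
from statistics import mean

# (actual target distance in meters, walked distance in meters) per trial
trials = {
    "short_PEBW": [(3.0, 2.4), (4.5, 3.5), (6.0, 4.6)],
    "long_PEBW":  [(3.0, 2.8), (4.5, 4.1), (6.0, 5.5)],
}

for condition, data in trials.items():
    ratios = [walked / actual for actual, walked in data]
    print(f"{condition}: mean judged/actual = {mean(ratios):.2f}")
```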
- Conference Article
39
- 10.1109/vr.2019.8798095
- Mar 1, 2019
Augmented reality (AR) technologies have the potential to provide individuals with unique training and visualizations, but the effectiveness of these applications may be influenced by users' perceptions of the distance to AR objects. Perceived distances to AR objects may be biased if these objects do not appear to make contact with the ground plane. The current work compared distance judgments of AR targets presented on the ground versus off the ground when no additional AR depth cues, such as shadows, were available to denote ground contact. We predicted that without additional information for height off the ground, observers would perceive the off-ground objects as placed on the ground, but at farther distances. Furthermore, this bias should be exaggerated when targets were viewed with one eye rather than two. In our experiment, participants judged the absolute egocentric distance to various cubes presented on or off the ground with an action-based measure, blind walking. We found that observers walked farther for off-ground AR objects and that this effect was exaggerated when participants viewed off-ground objects with monocular vision compared to binocular vision. However, we also found that the restriction of binocular cues influenced participants' distance judgments for on-ground AR objects. Our results suggest that distances to off-ground AR objects are perceived differently than on-ground AR objects and that the elimination of binocular cues further influences how users perceive these distances.
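The predicted off-ground bias follows from simple angular-declination geometry. As an illustration (the abstract does not spell out this derivation), assume eye height e, a target at true distance d raised height h above the ground, and an observer who treats the target as resting on the ground:

```latex
% Declination of the raised target below the horizon, and the ground-plane
% distance implied if the target is assumed to lie on the ground:
\[
  \tan\theta = \frac{e - h}{d},
  \qquad
  d' = \frac{e}{\tan\theta} = \frac{e\,d}{e - h} > d \quad (0 < h < e).
\]
% The recovered distance d' exceeds the true distance d, and the overshoot
% grows with the target's height h, consistent with walking farther to
% off-ground AR targets.
```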
- Conference Article
2
- 10.1145/1179622.1179841
- Jan 1, 2006
In recent work [4,2], we have discovered that people are able to make surprisingly accurate judgments about egocentric distances in an immersive virtual environment (IVE) in the special case that the IVE represents a high-fidelity model of the same physical space that the user is actually occupying, and the user has been able to unambiguously verify this by viewing the real space prior to donning the display upon which the corresponding virtual environment is presented. Through follow-up experiments in multiple locales, we have verified that the key factor enabling this distance perception accuracy is the fact of co-location, rather than any particular characteristics of the physical environment [1]. One possible interpretation of these intriguing results is that observers are better enabled to make accurate judgments of egocentric distance in an IVE when they are cognitively 'immersed' or 'present' in the IVE -- i.e., when they readily accept the virtual environment as being 'equivalent' to the real world and are therefore prepared to act in the virtual world in the same way that they act in the real world [3]. However, another interpretation is also possible: it could be that people are able to make accurate judgments of egocentric distances in a virtual environment when they know that it exactly corresponds to a recently viewed real environment because they are able to form a metrically accurate mental model of the spatial structure of the real environment from their brief exposure to it, so that when they are subsequently presented with the virtual environment they simply calibrate their mental model of distances in the IVE to be consistent with their remembered model of the corresponding real environment. In order to disambiguate between the 'presence' hypothesis and the 'spatial memory' hypothesis, we conducted the following study. Using a between-subjects design, we asked observers to make judgments of egocentric distance via blind walking in a real room and in one of three different virtual models, each of which was described to the participants as representing a high-fidelity virtual model of that same room. However, only one of the virtual models was actually an identical match in size to the real room. Nine of our 23 participants viewed a virtual model in which each of the walls had been surreptitiously moved 3 ft inward towards the center of the room, and another nine viewed a virtual model in which each of the walls had been surreptitiously moved 3.75 ft outwards from the center of the room. In each case, the textures were touched up in Photoshop to effect the changes without scaling anything. The remaining 5 participants viewed the same-sized virtual model, replicating our earlier study. Acoustic cues were muffled for all participants by a radio playing static, and no training or feedback was given at any time. If the 'presence' hypothesis holds, we would expect either that the performance of participants in each group would be about the same, or that distances would be slightly underestimated in each of the artificially manipulated rooms.
However, if the 'spatial memory' hypothesis holds, then we would expect that distance judgments would tend to deviate in opposite directions in the smaller and larger rooms, relative to those in the same-sized room: participants who experience the smaller virtual room should overestimate distances in the virtual environment, and participants who experience the larger room should underestimate distances in the virtual environment, relative to the real room. Figure 1 shows the three virtual environments used in this study, and figure 2 shows the average relative error in the distance judgments made by each participant in each virtual environment (vertical axis), compared to their error in the real environment (horizontal axis). We can see that most participants who experienced the accurately sized virtual room made distance judgments that were nearly equivalent in the real and virtual environments, consistent with our earlier findings [2]. However, many of the participants who experienced the smaller room model, and nearly all of the participants who experienced the larger room model, judged distances to be shorter, on average, in the virtual world than in the real world. These trends are statistically significant, and seem to support the 'presence' hypothesis more strongly than the 'spatial memory' hypothesis.
- Conference Article
10
- 10.1145/1140491.1140534
- Jul 28, 2006
In recent work [4,2], we have discovered that people are able to make surprisingly accurate judgments about egocentric distances in an immersive virtual environment (IVE) in the special case that the IVE represents a high-fidelity model of the same physical space that the user is actually occupying, and the user has been able to unambiguously verify this by viewing the real space prior to donning the display upon which the corresponding virtual environment is presented. Through follow-up experiments in multiple locales, we have verified that the key factor enabling this distance perception accuracy is the fact of co-location, rather than any particular characteristics of the physical environment [1]. One possible interpretation of these intriguing results is that observers are better enabled to make accurate judgments of egocentric distance in an IVE when they are cognitively 'immersed' or 'present' in the IVE -- i.e., when they readily accept the virtual environment as being 'equivalent' to the real world and are therefore prepared to act in the virtual world in the same way that they act in the real world [3]. However, another interpretation is also possible: it could be that people are able to make accurate judgments of egocentric distances in a virtual environment when they know that it exactly corresponds to a recently viewed real environment because they are able to form a metrically accurate mental model of the spatial structure of the real environment from their brief exposure to it, so that when they are subsequently presented with the virtual environment they simply calibrate their mental model of distances in the IVE to be consistent with their remembered model of the corresponding real environment. In order to disambiguate between the 'presence' hypothesis and the 'spatial memory' hypothesis, we conducted the following study. Using a between-subjects design, we asked observers to make judgments of egocentric distance via blind walking in a real room and in one of three different virtual models, each of which was described to the participants as representing a high-fidelity virtual model of that same room. However, only one of the virtual models was actually an identical match in size to the real room. Nine of our 23 participants viewed a virtual model in which each of the walls had been surreptitiously moved 3 ft inward towards the center of the room, and another nine viewed a virtual model in which each of the walls had been surreptitiously moved 3.75 ft outwards from the center of the room. In each case, the textures were touched up in Photoshop to effect the changes without scaling anything. The remaining 5 participants viewed the same-sized virtual model, replicating our earlier study. Acoustic cues were muffled for all participants by a radio playing static, and no training or feedback was given at any time. If the 'presence' hypothesis holds, we would expect either that the performance of participants in each group would be about the same, or that distances would be slightly underestimated in each of the artificially manipulated rooms.
However, if the 'spatial memory' hypothesis holds, then we would expect that distance judgments would tend to deviate in opposite directions in the smaller and larger rooms, relative to those in the same-sized room: participants who experience the smaller virtual room should overestimate distances in the virtual environment, and participants who experience the larger room should underestimate distances in the virtual environment, relative to the real room. Figure 1 shows the three virtual environments used in this study, and figure 2 shows the average relative error in the distance judgments made by each participant in each virtual environment (vertical axis), compared to their error in the real environment (horizontal axis). We can see that most participants who experienced the accurately sized virtual room made distance judgments that were nearly equivalent in the real and virtual environments, consistent with our earlier findings [2]. However, many of the participants who experienced the smaller room model, and nearly all of the participants who experienced the larger room model, judged distances to be shorter, on average, in the virtual world than in the real world. These trends are statistically significant, and seem to support the 'presence' hypothesis more strongly than the 'spatial memory' hypothesis.
- Research Article
97
- 10.1109/tvcg.2013.37
- Apr 1, 2013
- IEEE Transactions on Visualization and Computer Graphics
The following series of experiments explore the effect of static peripheral stimulation on the perception of distance and spatial scale in a typical head-mounted virtual environment. It was found that applying constant white light in an observer's far periphery enabled the observer to more accurately judge distances using blind walking. An effect of similar magnitude was also found when observers estimated the size of a virtual space using a visual scale task. The presence of the effect across multiple psychophysical tasks provided confidence that a perceptual change was, in fact, being invoked by the addition of the peripheral stimulation. These results were also compared to observer performance in a very large field of view virtual environment and in the real world. The subsequent findings raise the possibility that distance judgments in virtual environments might be considerably more similar to those in the real world than previous work has suggested.
- Research Article
29
- 10.1145/2325722.2325727
- Jul 1, 2012
- ACM Transactions on Applied Perception
Numerous studies report that people underestimate egocentric distances in Head-Mounted Display (HMD) virtual environments compared to real environments as measured by direct blind walking. Geometric minification, or rendering graphics with a larger field of view than the display's field of view, has been shown to eliminate this underestimation in a virtual hallway environment [Kuhl et al. 2006, 2009]. This study demonstrates that minification affects blind walking in a sparse classroom and does not influence verbal reports of distance. Since verbal reports of distance have been reported to be compressed in real environments, we speculate that minification in an HMD replicates people's real-world blind walking and verbal report distance judgments. We also demonstrate a new method for quantifying any unintentional miscalibration in our experiments. This process involves using the HMD in an augmented reality configuration and having each participant indicate where the targets and horizon appeared after each experiment. More work is necessary to understand how and why minification changes verbal- and walking-based egocentric distance judgments differently.
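Geometric minification renders the scene with a geometric field of view larger than the display's physical field of view, so the imagery appears slightly shrunk. The sketch below illustrates this relationship under a simple symmetric pinhole-projection assumption; the FOV value and the scale-factor parameterization are illustrative, not taken from the studies above.

```python
import math

def minified_projection_fov(display_fov_deg: float, minification: float) -> float:
    """Geometric FOV to render with so that imagery shown on a display with
    display_fov_deg appears minified by the given factor (> 1 shrinks).
    Assumes a symmetric pinhole projection; real HMD calibration is more involved."""
    half_tan = math.tan(math.radians(display_fov_deg) / 2.0)
    return math.degrees(2.0 * math.atan(minification * half_tan))

# Example: a 60-degree vertical display FOV rendered with ~1.2x minification
print(minified_projection_fov(60.0, 1.2))   # ~69.3 degrees of geometric FOV
```

Because the display's physical FOV stays fixed while the rendered FOV grows, each object subtends a smaller visual angle, which is the manipulation the abstract describes.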
- Research Article
108
- 10.3758/app.71.6.1284
- Aug 1, 2009
- Attention, Perception, & Psychophysics
In immersive virtual environments, judgments of perceived egocentric distance are significantly underestimated, as compared with accurate performance in the real world. Two experiments assessed the influence of graphics quality on two distinct estimates of distance, a visually directed walking task and verbal reports. Experiment 1 demonstrated a similar underestimation of distances walked to previously viewed targets in both low- and high-quality virtual classrooms. In Experiment 2, participants' verbal judgments underestimated target distances in both graphics quality environments but were more accurate in the high-quality environment, consistent with the subjective impression that high-quality environments seem larger. Contrary to previous results, we suggest that quality of graphics does influence judgments of distance, but only for verbal reports. This behavioral dissociation has implications beyond the context of virtual environments and may reflect a differential use of cues and context for verbal reports and visually directed walking.
- Conference Article
12
- 10.1109/vr50410.2021.00056
- Mar 1, 2021
Understanding the extent to which, and conditions under which, scene detail affects spatial perception accuracy can inform the responsible use of sketch-like rendering styles in applications such as immersive architectural design walkthroughs using 3D concept drawings. This paper reports the results of an experiment that provides important new insight into this question using a custom-built, portable video-see-through (VST) conversion of an optical-see-through head-mounted display (HMD). Participants made egocentric distance judgments by blind walking to the perceived location of a real physical target in a real-world outdoor environment under three different conditions of HMD-mediated scene detail reduction: full detail (raw camera view), partial detail (Sobel-filtered camera view), and no detail (complete background subtraction), and in a control condition of unmediated real-world viewing through the same HMD. Despite the significant differences in participants' ratings of visual and experiential realism between the three different video-see-through rendering conditions, we found no significant difference in the distances walked between these conditions. Consistent with prior findings, participants underestimated distances to a significantly greater extent in each of the three VST conditions than in the real-world condition. The lack of any clear penalty to task performance accuracy not only from the removal of scene detail, but also from the removal of all contextual cues to the target location, suggests that participants may be relying nearly exclusively on context-independent information such as angular declination when performing the blind-walking task. This observation highlights the limitations in using blind walking to the perceived location of a target on the ground to make inferences about people's understanding of the 3D space of the virtual environment surrounding the target. For applications like immersive architectural design, where we seek to verify the equivalence of the 3D spatial understanding derived from virtual immersion and real-world experience, additional measures of spatial understanding should be considered.
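Angular declination is "context-independent" in the sense that a ground-plane distance estimate requires only the observer's eye height and the target's angle below the horizon, with no other scene information. A minimal illustration with hypothetical values (not from the experiment):

```python
import math

def distance_from_declination(eye_height_m: float, declination_deg: float) -> float:
    """Ground-plane distance implied by a target's angular declination below the horizon.
    Depends only on eye height and the declination angle -- no scene or context cues."""
    return eye_height_m / math.tan(math.radians(declination_deg))

print(distance_from_declination(1.6, 17.7))  # roughly 5 m for a 1.6 m eye height
```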
- Research Article
44
- 10.1016/j.displa.2013.01.001
- Feb 9, 2013
- Displays
Egocentric distance perception in large screen immersive displays
- Conference Article
13
- 10.1145/1012551.1012584
- Aug 7, 2004
Numerous studies have reported evidence of a compression of egocentric distance perception in immersive virtual environments (IVEs). Motivated by the long-term goal of exploring the potential of IVEs for facilitating the process of conceptual design in architecture, we set out to investigate possible methods for facilitating a more accurate perception of egocentric distance in these environments. In this poster we describe the results of our recent experiment comparing distance perception, as indicated by direct blind walking, in a real environment vs. a high-fidelity virtual model of the same environment, presented in an nVisor SX head-mounted display (1280x1024 resolution; 60° diagonal monocular field of view; 100% stereo overlap) tracked at 500 Hz over a 24' × 24' area. The two most important elements in our present experiment were: to remove the possibility of cognitive dissonance associated with having the presented virtual environment be different from the real environment, and to see whether providing users with short-range haptic feedback about the presence, size, and spatial location of a real object in the virtual environment, in combination with allowing them a moderate amount of time to experience the presented virtual environment in the context of performing an engaging and reasonably enjoyable task, could together improve the users' ability to make accurate judgments of egocentric distance in the virtual environment. Using a within-subjects experimental design, we had 7 naive participants indicate their perception of three different distance intervals (10', 20' and 30', interleaved), marked by tape at various locations on the floor in our lab (figure 1a) and in a high-fidelity virtual reconstruction of our lab (figure 1b), over 4 conditions: real world (baseline), co-located virtual environment (baseline), virtual environment after ten minutes of experience completing a virtual modeling task on a real table also represented in the virtual world, followed by a real-world post-test. Prior to testing, participants completed 5 practice walks for each interval with feedback in a basement hallway. Our hypotheses were that we would, in our first and second conditions, replicate earlier findings of accurate blind walking in the real world [Rieser et al. 1990], but 'walking short' in the virtual world [Thompson et al. 03]. We expected that in the third and fourth conditions we would find evidence of adaptation to the compressed representation, followed by an after-effect (overshooting) in the real world. However, our results (figure 3) did not exactly conform to these expectations. Specifically, we found a far smaller amount of distance compression in our baseline VE condition than has been found in previous studies and, consequent to this 'ceiling effect', we found only weak evidence of trends in the expected directions. We also found that people walked more slowly in the IVE. The main difference between our VE conditions and the VE conditions used in previous studies is that in our case the virtual environment was rendered in full 3D using photorealistic textures, and was co-located with the real-world environment. Our training procedure was also different. There are several possible explanations for our results. They may suggest a greater tendency to 'walk normally' in an immersive virtual environment when it is clearly understood to be a faithful representation of an actual, co-located, real environment.
They may also reflect an effect of the greater availability in our IVE, relative to the IVEs used in previous studies, of rich optical flow cues in a stimulus that faithfully represents the spectrum of image spatial frequencies present in the real world. It is also possible that our results show some effects from the training. Further studies are planned to explore each of these possibilities.
- Conference Article
35
- 10.1145/2804408.2804427
- Sep 13, 2015
Distance perception is important for many virtual reality applications, and numerous studies have found underestimated egocentric distances in head-mounted display (HMD) based virtual environments. Applying minification to imagery displayed in HMDs is a method that can reduce or eliminate the underestimation [Kuhl et al. 2009; Zhang et al. 2012]. In a previous study, we measured distance judgments with direct blind walking through an Oculus Rift DK1 HMD and found that participants judged distance accurately in a calibrated condition, and minification caused subjects to overestimate distances [Li et al. 2014]. This article describes two experiments built on the previous study to examine distance judgments and minification with the Oculus Rift DK2 HMD (Experiment 1), and in the real world with a simulated HMD (Experiment 2). From the results, we found statistically significant distance underestimation with the DK2, but the judgments were more accurate than results typically reported in HMD studies. In addition, we discovered that participants made similar distance judgments with the DK2 and the simulated HMD. Finally, we found for the first time that minification had a similar impact on distance judgments in both virtual and real-world environments.
- Conference Article
15
- 10.1145/2931002.2931013
- Jul 22, 2016
Numerous studies have reported underestimated egocentric distances in virtual environments through head-mounted displays (HMDs). However, it has been found that distance judgments made through Oculus Rift HMDs are much less compressed, and their relatively high device field of view (FOV) may play an important role. Some studies showed that applying constant white light in viewers' peripheral vision improved their distance judgments through HMDs. In this study, we examine the effects of the device FOV and the peripheral vision by performing a blind walking experiment through an Oculus Rift DK2 HMD with three different conditions. For the BlackFrame condition, we rendered a rectangular black frame to reduce the device field of view of the DK2 HMD to match an NVIS nVisor ST60 HMD. In the WhiteFrame and GreyFrame conditions, we changed the frame color to solid white and middle grey. From the results, we found that the distance judgments made through the black frame were significantly underestimated relative to the WhiteFrame condition. However, no significant differences were observed between the WhiteFrame and GreyFrame conditions. This result provides evidence that the device FOV and peripheral light could influence distance judgments in HMDs, and the degree of influence might not change proportionally with respect to the peripheral light brightness.
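Restricting a wide-FOV HMD with a rendered frame amounts to masking everything outside the solid angle of the target FOV. The sketch below computes the inner opening of such a frame in normalized device coordinates, assuming a simple symmetric per-eye projection; the FOV numbers are illustrative, and the DK2's actual asymmetric, distortion-corrected projection is more complex.

```python
import math

def frame_opening_ndc(device_fov_deg: float, target_fov_deg: float) -> float:
    """Half-extent (in normalized device coordinates, 0..1) of the transparent
    opening of an overlay frame that restricts device_fov_deg down to target_fov_deg.
    Assumes a symmetric pinhole projection per eye."""
    return (math.tan(math.radians(target_fov_deg) / 2.0)
            / math.tan(math.radians(device_fov_deg) / 2.0))

# e.g. masking a ~94-degree display down to ~48 degrees per axis (illustrative numbers)
print(frame_opening_ndc(94.0, 48.0))   # ~0.42: frame covers everything outside +/-0.42 NDC
```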
- Conference Article
34
- 10.1145/2628257.2628273
- Aug 8, 2014
Distance perception is a crucial component for many virtual reality applications, and numerous studies have shown that egocentric distances are judged to be compressed in head-mounted display (HMD) systems. Geometric minification, a technique where the graphics are rendered with a field of view that is larger than the HMD's field of view, is one known method of eliminating the distance compression [Kuhl et al. 2009; Zhang et al. 2012]. This study uses direct blind walking to determine how minification might impact distance judgments in the Oculus Rift HMD, which has a significantly larger FOV than the HMDs used in previous minification studies. Our results show that people were able to make accurate distance judgments in a calibrated condition and that geometric minification causes people to overestimate distances. Since this study shows that minification can impact wide-FOV displays such as the Oculus, we discuss how it may be necessary to use calibration techniques that are more thorough than those described in this paper.
- Research Article
12
- 10.3389/fpsyg.2022.1061917
- Jan 11, 2023
- Frontiers in Psychology
Egocentric distance perception has received wide attention from researchers in the field of spatial perception because of its significance in daily life. It concerns the perceived distance from an observer to an object. Over the years, researchers have searched for optimal ways to measure perceived distance, and their contributions constitute a critical aspect of the field. This paper summarizes the methodological findings and divides the measurement methods for egocentric distance perception into three categories according to behavior type. The first is the Perceptional Method, including successive equal-appearing intervals of distance judgment measurement, verbal report, and the perceptual distance matching task. The second is the Directed Action Method, including blind walking, blind-walking gesturing, blindfolded throwing, and blind rope pulling. The last is the Indirect Action Method, including triangulation-by-pointing and triangulation-by-walking. We also summarize each method's procedure, core logic, scope of application, advantages, and disadvantages. Finally, we discuss future concerns of egocentric distance perception.
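Triangulation-based measures infer the judged distance from the geometry of a response rather than from walking the full extent. As a rough illustration of the core logic (not code from this review), triangulation-by-pointing can be modeled as intersecting the original viewing direction with the pointing direction taken after a known displacement:

```python
import numpy as np

def triangulated_distance(start, view_dir_deg, stop, point_dir_deg):
    """Judged target distance from triangulation-by-pointing (2D sketch).
    The target is taken to lie at the intersection of (a) the ray from the initial
    viewing position along the initial viewing direction and (b) the ray from the
    stopping position along the pointing direction; returns distance from start."""
    def unit(deg):
        r = np.radians(deg)
        return np.array([np.cos(r), np.sin(r)])
    p0, p1 = np.asarray(start, float), np.asarray(stop, float)
    d0, d1 = unit(view_dir_deg), unit(point_dir_deg)
    # Solve p0 + t*d0 = p1 + s*d1 for t (distance along the original viewing ray).
    A = np.column_stack((d0, -d1))
    t, _ = np.linalg.solve(A, p1 - p0)
    return t

# Viewer at the origin looks straight ahead (+y), sidesteps 2 m to the right,
# then points back toward where the target appeared.
print(triangulated_distance((0, 0), 90.0, (2, 0), 90.0 + 21.8))  # ~5 m
```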
- Conference Article
4
- 10.1145/3385959.3418447
- Oct 30, 2020
We perform an experiment on distance perception in a large-screen display immersive virtual environment. Large-screen displays typically make direct blind walking tasks impossible, despite them being a popular distance response measure in the real world and in head-mounted displays. We use a movable large-screen display to compare direct blind walking and indirect triangulated pointing with monoscopic viewing. We find that participants judged distances to be 89.4% ± 28.7% and 108.5% ± 44.9% of their actual distances in the direct blind walking and triangulated pointing conditions, respectively. However, we find no statistically significant difference between these approaches. This work adds to the limited number of research studies on egocentric distance judgments with a large display wall for distances of 3-5 meters. It is the first, to our knowledge, to perform direct blind walking with a large display.
- Research Article
9
- 10.1109/tvcg.2024.3456165
- Nov 1, 2024
- IEEE transactions on visualization and computer graphics
Virtual Reality (VR) systems are widely used, and it is essential to know whether spatial perception in virtual environments (VEs) is similar to reality. Research indicates that users tend to underestimate distances in VR. Prior work suggests that actual distance judgments in VR may not always match users' self-reported preferences for where they think they most accurately estimated distances. However, no explicit investigation has evaluated whether user preferences match actual performance in a spatial judgment task. We used blind walking to explore potential dissimilarities between actual distance estimates and user-selected preferences across visual complexities, VE conditions, and targets. Our findings show a gap between user preferences and actual performance when visual complexity was varied, which has implications for understanding visual perception, designing VR applications, and researching spatial perception, and which indicates the need to calibrate and align user preferences with true spatial perception abilities in VR.