Speed-of-Light VR for Blind People: Conveying the Location of Arm-Reach Targets
Interacting with close-range objects in Virtual Reality (VR) is often prompted by visual cues, making it hard for visually impaired people to perceive object locations and interact with them. To study how to enable blind users to locate and interact with close virtual objects, we adapted the arcade game Speed-of-Light as a blind-accessible VR application. We implemented three techniques: 1) Speech Feedback (e.g., “Top Right”), 2) Sonification, and 3) 2D Grid Position (e.g., “A3” for column and row). We then conducted a user study with 15 blind participants to provide insights into the design of non-visual techniques that convey information about targets at arm's reach. Speech Feedback was the most intuitive overall but verbose and the least flexible, while 2D Grid Position was found straightforward by regular spreadsheet users. Results also showed greater difficulty with Sonification, although it was valued by a few participants who appreciated the challenge.
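The paper does not publish its implementation, but the 2D Grid Position technique lends itself to a compact illustration. Below is a minimal, hypothetical sketch of how a target's position on an arm-reach plane might be mapped to a spreadsheet-style label such as "A3"; the grid dimensions and reachable bounds are assumptions, not values from the study.

```python
# Hypothetical sketch of the 2D Grid Position idea: map a target's
# position on an arm-reach plane to a spreadsheet-style cell label
# (e.g., "A3"). Grid size and bounds are assumptions, not the paper's.
def grid_label(x: float, y: float,
               width: float = 0.6, height: float = 0.6,
               cols: int = 4, rows: int = 4) -> str:
    """x, y in meters from the bottom-left of the reachable plane;
    columns are letters, rows are numbers counted from the top."""
    col = min(cols - 1, max(0, int(x / width * cols)))
    row = min(rows - 1, max(0, int((height - y) / height * rows)))
    return f"{chr(ord('A') + col)}{row + 1}"

# e.g., grid_label(0.05, 0.55) -> "A1" (top-left region of the grid)
```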
- Research Article
- 10.3724/sp.j.1041.2012.00040
- Jan 6, 2013
- Acta Psychologica Sinica
The conceptualization of the abstract concept of "time" needs to be grounded in a more concrete conceptual domain, "space". It has been demonstrated that this "left-past, right-future" representation of time is psychologically real, and that the experience responsible for it is related to participants' exposure to a left-to-right orthographic system; that is, reading/writing directionality affects the representation of temporal sequences. Two theories attempt to explain this finding: perceptual symbol theory and propositional symbol theory. Perceptual symbol theory asserts that the left-right mapping of time is perceptual, while propositional symbol theory asserts that this mapping is linked to an abstract, amodal concept of time. The most substantial difference between the two theories is whether a perceptual facilitation effect with temporal words is specific to a particular modality. The present study tested whether the mapping could be accessed through the auditory modality. Experiment 1 explored the modality specificity of the horizontal metaphoric representation of time in blind people, who carried out a temporal judgement task on auditorily presented words referring either to the past or to the future. A 2×2×2 repeated-measures design was adopted, with temporal reference (past/future), target location (left/right), and response location (left/right) as independent variables. The results showed that the horizontal metaphoric representation of time was observed only at the motoric level in blind people: they were faster responding to past words or sentences with their left hands and to future words or sentences with their right hands. These results indicated that the spatial information used to represent time was perceptual. In contrast to their reading directionality, the writing directionality of the blind participants is right-to-left; the directionality of the metaphoric representation of time was thus not coherent with their writing directionality, suggesting that the relevant sensory-motor experience is tied to reading habits. In Experiment 2, sighted people were divided into two groups, a sighted group and a blindfolded group; the procedure was the same as in Experiment 1, using a 2×2×2×2 mixed design. The two groups, which differed in their spatial frame of reference, showed different results: only when the words were presented auditorily on the right side did the blindfolded group show congruency between response side and temporal reference at the motoric level. This result further supported perceptual symbol theory. Because the spatial cognition of blind people is known to differ from that of sighted people, a comprehensive analysis of the data from the two experiments tested whether the three groups differed in their horizontal metaphoric representation of time, using a 3×2×2×2 mixed design. The results of the blind and sighted people were similar: neither was affected by auditory spatial information in the horizontal metaphoric representation of time, which suggests that spatial cognitive ability in the blind participants' motoric modality compensated for the loss of sight.
- Research Article
- 10.1109/tvcg.2025.3549847
- May 1, 2025
- IEEE transactions on visualization and computer graphics
Aiming tasks are common in VR, but are challenging to perform without vision. They require identifying a target's location and then precisely aiming and selecting it. In this paper, we explore how to support blind people in aiming tasks using a VR Archery scenario. We implemented three techniques: 1) Spatialized Audio, a baseline where the target emits a specific 3D sound to convey its location; 2) Target Confirmation, where the previous condition is augmented with secondary Beep sounds to indicate proximity to the target; and 3) Reticle-Target Perspective, where the auditory feedback conveys the relation between the target and the user's aiming reticle. A study with 15 blind participants compared the three techniques under two scenarios: stationary and moving targets. Target Confirmation and Reticle-Target Perspective clearly outperformed Spatialized Audio, but user preferences were evenly split between these two techniques. We discuss how our findings may support the development of VR experiences that are more accessible and enjoyable to a broader range of users.
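The abstract only states that secondary Beep sounds indicate proximity; one plausible reading is a beep whose repetition rate increases as the reticle's angular error shrinks. The sketch below illustrates that assumed mapping; the cutoff cone and beep rates are illustrative, not the paper's parameters.

```python
# Hypothetical proximity-to-beep mapping for the Target Confirmation
# idea: faster beeps as the reticle nears the target. Thresholds and
# rates are assumptions; the paper only says beeps indicate proximity.
import math

def beep_interval(aim_dir, target_dir, min_s=0.1, max_s=1.0, cutoff_deg=30.0):
    """Angular error (deg) between two unit vectors -> seconds between
    beeps; returns None when the target is outside the cutoff cone."""
    dot = sum(a * t for a, t in zip(aim_dir, target_dir))
    err = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if err > cutoff_deg:
        return None  # no confirmation beeps far from the target
    return min_s + (max_s - min_s) * (err / cutoff_deg)
```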
- Research Article
- 10.1177/0145482x8307700405
- Apr 1, 1983
- Journal of Visual Impairment & Blindness
Examines the spatial ability of sighted, blindfolded sighted, and congenitally blind subjects. Subjects walked through an unfamiliar, large-scale space in which target locations could not be seen simultaneously; they were then taken to each target location and asked to indicate the positions of the other locations. Results indicate that past visual experience helps individuals acquire spatial information from large-scale environments.
- Research Article
- 10.3758/app.72.1.23
- Jan 1, 2010
- Attention, Perception, & Psychophysics
Critical to low-vision navigation are the abilities to recover scale and update a 3-D representation of space. In order to investigate whether these abilities are present under low-vision conditions, we employed the triangulation task of eyes-closed indirect walking to previously viewed targets on the ground. This task requires that the observer continually update the location of the target without any further visual feedback of his/her movement or the target's location. Normally sighted participants were tested monocularly in a degraded vision condition and a normal vision condition on both indirect and direct walking to previously viewed targets. Surprisingly, we found no difference in walked distances between the degraded and normal vision conditions. Our results provide evidence for intact spatial updating even under severely degraded vision conditions, indicating that participants can recover scale and update a 3-D representation of space under simulated low vision.
- Research Article
- 10.1163/187847612x646767
- Jan 1, 2012
- Seeing and Perceiving
Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses, such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Röder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and blindfolded sighted controls in an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid special attention to the potential effect of sound-source location, comparing accuracy and speed of target detection in peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in both groups we observed an interaction between target location and distractor location: targets were detected faster when their location differed from that of the distractors. However, this effect was attenuated in early blind subjects and even cancelled in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision by enhancing their ability to process auditory information, but also by changing the spatial distribution of their auditory attention resources.
- Research Article
- 10.1163/156856899x00030
- Jan 1, 1999
- Spatial Vision
Topographic characteristics of peripheral letter recognition were investigated using a sustained attention paradigm to clarify whether its deployment in the visual field is equally easy in all eight tested locations at 7.5 deg eccentricity. Target size (36 arcmin) was clearly above threshold, so that letters were easily recognized at long durations (> 500 ms). In the main experiment, they were displayed for an individually determined duration of 66 to 167 ms. Six of twelve normally sighted subjects were in their twenties, the others in their fifties. The target location was cued (1 s), and after 2.5-4 s delay, a target was displayed. The results provide strong evidence that performance depended significantly on location and subject. All spatial characteristics showed anisometry, and most showed vertical asymmetry of either sign. Performance was always best on the horizontal meridian. None of the results correlated with subject age. These findings also show that in disfavored locations, performance is limited by deploying attention there, not by holding it there. Consequently, in low vision rehabilitation after binocular central field loss, the choice of a preferred retinal locus for 'eccentric viewing' can be limited by an attentional factor that is unrelated to the eye disease.
- Book Chapter
- 10.1007/978-3-030-51517-1_36
- Jan 1, 2020
- The Impact of Digital Technologies on Public Health in Developed and Developing Countries
Navigation is an important human task that relies on the human sense of vision. In this context, recent technological developments provide technical assistance to support visually impaired people in their daily tasks and improve their quality of life. In this paper, we present a mobile assistive application called “GuiderMoi” that retrieves information about directions using color targets and identifies the next orientation for the visually impaired. To avoid detection failures and the inaccurate tracking caused by the mobile camera, the proposed method, based on the CamShift algorithm, aims to provide better location and identification of color targets. Tests were conducted in natural indoor scenes. The results, which depend on distance and viewing angle, defined the values needed to achieve the highest rate of target recognition. This work opens perspectives such as incorporating augmented reality and intelligent navigation based on machine learning and real-time processing.
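CamShift is a standard OpenCV routine, so the tracking loop the abstract alludes to can be sketched directly. The following minimal example tracks a color target by hue back-projection; the initial window and color thresholds are illustrative assumptions, and the GuiderMoi-specific direction logic is omitted.

```python
# Minimal color-target tracking with OpenCV's CamShift, in the spirit
# of the approach above. Window and thresholds are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # mobile/webcam stream
ok, frame = cap.read()

# Assume the color target starts inside this window (x, y, w, h).
track_window = (200, 150, 80, 80)
x, y, w, h = track_window

# Build a hue histogram of the target region for back-projection.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves less than 1 px.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window's size and orientation to the target.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    (cx, cy), _, _ = rot_rect  # target center would drive direction cues
```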
- Conference Article
- 10.1109/icarm52023.2021.9536077
- Jul 3, 2021
In this paper, a novel guide robot for blind people based on an elastic rope and a force sensor is proposed, which overcomes a shortcoming of existing guide robots that use a rigid stick to pull the user and thus provide an uncomfortable guiding experience. The robot has the advantages of simple structure, low cost, light weight, foldability, and portability. While walking, blind users can adjust the speed of the robot at any time to match their own pace, which yields reliable and safe following behavior. Experiments were carried out in an office and a larger factory. The results show that the feedback force and speed always remain within a small and stable range while the robot pulls the user, so the user experience is good, and blind people can be brought safely and reliably to the target location.
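The abstract reports that feedback force stays within a small, stable range but does not state the control law. A simple proportional rule on rope tension is one plausible sketch; the setpoint, gain, and speed limit below are assumptions, not the paper's values.

```python
# Hypothetical speed controller for the elastic-rope guide robot. The
# paper does not give its control law; setpoint, gain, and limits here
# are illustrative assumptions.
F_REF = 5.0   # desired rope tension in newtons (assumed)
KP = 0.05     # gain in (m/s) per newton, applied per second (assumed)
V_MAX = 1.2   # walking-speed limit in m/s (assumed)

def update_speed(v: float, f_measured: float, dt: float) -> float:
    """Slow down when tension rises above F_REF (the user is lagging),
    speed up when it drops (the user is catching up)."""
    v -= KP * (f_measured - F_REF) * dt
    return max(0.0, min(V_MAX, v))
```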
- Research Article
- 10.1007/s10514-016-9595-8
- Aug 11, 2016
- Autonomous Robots
Navigation in complex and unknown environments is a major challenge for elderly blind people. Unfortunately, conventional navigation aids such as white canes and guide dogs provide only limited assistance to blind people with walking impairments, as they can hardly be combined with a walker, which is required for walking assistance. Additionally, such navigation aids are constrained to the local vicinity only. We believe that technologies developed in the field of robotics have the potential to assist blind people with walking disabilities in complex navigation tasks, as they can provide information about obstacles and reason about both global and local aspects of the environment. The contribution of this article is a smart walker that navigates blind users safely by leveraging recent developments in robotics. Our walker can support the user in two ways, namely by providing information about the vicinity to avoid obstacles and by guiding the user to the designated target location. It includes vibro-tactile user interfaces and a controller that takes into account human motion behavior obtained from a user study. In extensive qualitative and quantitative experiments, which also involved blind and age-matched participants, we demonstrate that our smart walker safely navigates users with limited vision.
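The paper's controller is derived from a user study and is not reproduced here; as a purely illustrative stand-in, a vibro-tactile steering cue could map heading error to left/right grip vibration as follows.

```python
# Hypothetical vibro-tactile steering cue for a smart walker: vibrate
# the left or right grip in proportion to the heading error toward the
# next waypoint. This simple mapping is an assumption, not the paper's
# controller.
def steering_cue(heading_err_rad: float, max_err_rad: float = 0.8):
    """Positive error = target to the right. Returns (left, right)
    vibration intensities in [0, 1]."""
    s = max(-1.0, min(1.0, heading_err_rad / max_err_rad))
    return (max(0.0, -s), max(0.0, s))
```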
- Research Article
- 10.1007/s00221-017-5063-8
- Aug 12, 2017
- Experimental Brain Research
Monitoring one's safety during low vision navigation demands limited attentional resources which may impair spatial learning of the environment. In studies of younger adults, we have shown that these mobility monitoring demands can be alleviated, and spatial learning subsequently improved, via the presence of a physical guide during navigation. The present study extends work with younger adults to an older adult sample with simulated low vision. We test the effect of physical guidance on improving spatial learning as well as general age-related changes in navigation ability. Participants walked with and without a physical guide on novel real-world paths in an indoor environment and pointed to remembered target locations. They completed concurrent measures of cognitive load on the trials. Results demonstrate an improvement in learning under low vision conditions with a guide compared to walking without a guide. However, our measure of cognitive load did not vary between guidance conditions. We also conducted a cross-age comparison and found support for age-related declines in spatial learning generally and greater effects of physical guidance with increasing age.
- Research Article
- 10.1364/ao.385841
- Apr 21, 2020
- Applied Optics
This paper revisits one of the most critical problems in the area of computer vision: the automatic localization of a single camera. Today, several robotic devices rely on technologies other than visual information to perform self-localization. An artificial optical system would benefit significantly from knowing its location within a three-dimensional world, since this is a crucial step toward other complex tasks. In this paper, we show how to compute the position of the camera through an uncalibrated method that makes use of projective properties, the projection model of the camera, and some reference points. We introduce a simple yet powerful way to detect coded targets in photographic images. Then, we describe an uncalibrated approach used to identify the location of a camera in three-dimensional space. The experiments carried out confirm the validity of our proposal.
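The paper's method is uncalibrated and relies on projective properties; for orientation, the sketch below shows the more familiar calibrated variant of the same problem, recovering the camera center from known reference points with OpenCV's solvePnP. All point coordinates and intrinsics are illustrative assumptions, and this is not the authors' algorithm.

```python
# Calibrated camera localization from known 3D reference points; a
# stand-in for comparison with the paper's uncalibrated approach.
# All coordinates and intrinsics below are illustrative assumptions.
import cv2
import numpy as np

object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0.5, 0.5, 0.5]], dtype=np.float64)  # targets (m)
image_pts = np.array([[320, 240], [420, 238], [424, 330],
                      [318, 334], [371, 260]], dtype=np.float64)  # px
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec  # camera center in world coordinates
```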
- Research Article
- 10.1007/s10339-025-01296-3
- Aug 22, 2025
- Cognitive Processing
Audiovisual integration occurs automatically and affects visual processing. This study aims to investigate whether temporally synchronized auditory signals enhance monocular signals during binocular observation. In Experiment 1, 16 participants performed a visual target localization task. A mirror stereoscope was used to present a rapid serial visual presentation (RSVP) stream of distractors to both eyes, with a visual target inserted in either both eyes, the dominant eye, or the non-dominant eye. Continuous low tones synchronized with distractors were paired with the target as either the same low tone (non-salience) or a high tone (salience). Detection facilitation rates by tone type were analyzed through multiple comparisons. Results showed a significant detection enhancement only when the target appeared in the non-dominant eye. In Experiment 2, involving 16 participants, a similar RSVP was presented, but with an orientation discrimination task for parafoveally presented texture stimuli comprising 17 vertical Gabor patches. The angle and proportion of tilted patches were manipulated simultaneously, and logistic regression was used to estimate orientation discrimination thresholds. Contrary to predictions, salient tones did not reduce the thresholds. These findings suggest that temporally synchronized auditory signals can selectively enhance the monocular processing of weaker visual signals (i.e., non-dominant eye signals) before binocular fusion, particularly for spatial localization. However, these effects did not extend to the identification of visual content (i.e., orientation) or stable visual signals (i.e., dominant or binocular). The results highlight the role of audiovisual integration in supporting unstable monocular signals and suggest potential applications in low vision training.