Reevaluating the Gaze Cursor in Virtual Reality: A Comparative Analysis of Cursor Visibility, Confirmation Mechanisms, and Task Paradigms.

  • Abstract
  • Similar Papers
Abstract

Cursors and how they are presented significantly influence user experience in both VR and non-VR environments by shaping how users interact with and perceive interfaces. In traditional interfaces, the cursor is a fundamental component for translating human movement into digital interactions, supporting interaction accuracy, efficiency, and experience. Cursor design and visibility can affect users' ability to locate interactive elements and understand system feedback. In VR, cursor manipulation is more complex than in non-VR environments, as the cursor can be controlled through hand, head, and gaze movements. With the arrival of the Apple Vision Pro, gaze-controlled invisible cursors have gained prominence; however, the effects of this type of cursor remain largely unexplored. This work presents a comprehensive study of the effects of cursor visibility (visible vs. invisible) on gaze-based interaction in VR environments. Through two user studies, we investigate how cursor visibility affects user performance and experience across different confirmation mechanisms and tasks. The first study focuses on selection tasks, examining the influence of target width, movement amplitude, and three common confirmation methods (air tap, blinking, and dwell). The second study explores pursuit tasks, analyzing cursor effects under varying movement speeds. Our findings reveal that cursor visibility significantly affects both objective performance metrics and subjective user preferences, but that these effects vary with the confirmation mechanism and the task type. Based on our empirical results, we propose eight design implications to guide the future development of gaze-based interfaces in VR. These insights highlight the importance of tailoring cursor metaphors to specific interaction tasks and provide practical guidance for researchers and developers optimizing VR user interfaces.
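Of the three confirmation methods studied, dwell is the simplest to prototype because it needs no input beyond the gaze signal itself. As a point of reference, here is a minimal sketch of dwell-based confirmation for a gaze cursor; the class, the 600 ms threshold, and the 1.5° activation radius are illustrative assumptions, not parameters taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DwellSelector:
    """Confirms a selection once gaze stays on the target long enough."""
    dwell_s: float = 0.6      # assumed dwell threshold; not from the paper
    radius_deg: float = 1.5   # assumed activation radius in visual degrees
    _elapsed: float = field(default=0.0, repr=False)

    def update(self, gaze_offset_deg: float, dt: float) -> bool:
        """Feed one gaze sample; return True when the dwell completes.

        gaze_offset_deg: angular distance from the gaze ray to the target center.
        dt: time elapsed since the previous sample, in seconds.
        """
        if gaze_offset_deg <= self.radius_deg:
            self._elapsed += dt
            if self._elapsed >= self.dwell_s:
                self._elapsed = 0.0   # re-arm for the next selection
                return True
        else:
            self._elapsed = 0.0       # leaving the target resets the timer
        return False
```

The same update loop runs whether or not the cursor is rendered; visibility changes only what the user sees, which is exactly the manipulation the two studies isolate.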

Similar Papers
  • Research Article
  • Citations: 2
  • 10.1080/10447318.2024.2342085
How Do Users Regulate Interaction Behaviors While Performing a Drag-and-Drop Task in a Virtual Reality Environment?
  • Apr 20, 2024
  • International Journal of Human–Computer Interaction
  • Min Chul Cha + 4 more

In virtual reality (VR) environments, users interact with objects using mid-air gestures; drag-and-drop (DND) is one of the most frequently performed tasks. However, few studies have considered the characteristics of VR and the interaction behavior of DND. This study aimed to investigate the interaction behavior of DND in a VR environment using the Oculus Quest 2 system by controlling the target width, movement amplitude, and movement direction. A DND task in VR has three phases: acceleration, deceleration, and correction. We observed that the target width, movement amplitude, and movement direction had a significant effect on the three phases of DND behavior. The effects were different for each behavioral phase, and an in-depth interaction analysis was conducted through the segmentation of behavior and consideration of vertical movement. These findings can contribute to the evaluation of work performance and interaction correction techniques in VR environments.
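The three-phase decomposition used in this study can be operationalized from a movement velocity profile. The sketch below shows one plausible segmentation under assumed conventions (the acceleration/deceleration boundary at peak velocity, and the correction phase starting where velocity first drops below a fraction of the peak); the paper's exact phase definitions may differ.

```python
import numpy as np

def segment_dnd_phases(velocity: np.ndarray, dt: float, corr_frac: float = 0.1):
    """Split a drag-and-drop velocity profile into acceleration,
    deceleration, and correction phase durations (in seconds).

    corr_frac: fraction of peak velocity below which late movement is
    treated as corrective (an assumed convention, not the paper's).
    """
    peak = int(np.argmax(velocity))            # end of the acceleration phase
    threshold = corr_frac * velocity[peak]
    # The first post-peak sample below the threshold marks the start of
    # the correction phase; if none exists, there is no correction phase.
    below = np.nonzero(velocity[peak:] < threshold)[0]
    corr_start = peak + int(below[0]) if below.size else len(velocity)
    return (peak * dt,                         # acceleration duration
            (corr_start - peak) * dt,          # deceleration duration
            (len(velocity) - corr_start) * dt) # correction duration
```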

  • Research Article
  • Citations: 151
  • 10.1371/journal.pone.0191846
The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.
  • Feb 1, 2018
  • PLOS ONE
  • Natalia Cooper + 5 more

Objective and subjective measures of performance in virtual reality environments improve as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues), so substitute cues can be used to enhance the informational content of a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality with alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best, and felt an increased sense of immersion and involvement (collectively referred to as 'presence'), when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and users' overall experience. On this basis, we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  • Research Article
  • Citations: 695
  • 10.1016/j.apergo.2017.12.016
Virtual reality sickness questionnaire (VRSQ): Motion sickness measurement index in a virtual reality environment
  • Jan 16, 2018
  • Applied Ergonomics
  • Hyun K Kim + 3 more


  • Conference Article
  • 10.54941/ahfe100976
Investigating the effect of targets’ spatial distribution on the performance of gesture interaction in virtual reality environment
  • Jan 1, 2022
  • Kai Chen + 2 more

Pointing at graphical elements is a primary task in human-computer interaction, and precise target selection is an important part of interactive tasks such as virtual simulation and virtual assembly. In practical applications, especially in virtual reality (VR) environments, how to select target objects accurately and efficiently has become a popular research topic in recent years. Gesture interaction, one of the most important interaction technologies in human-computer interaction, is widely used in VR environments. Building on gesture interaction with ray-casting feedback, this paper reports a multi-factor experiment in a VR environment exploring how depth and viewing angle affect performance in target selection tasks. Three depth levels of 1 m, 2.5 m, and 6 m were set in the experiment. The positions of nine circular targets were determined at 20° intervals in the horizontal and vertical viewing angles, dividing the vertical plane in front of the participants into nine areas; target size was held constant across depth levels. Participants clicked circular targets in the different viewing-angle areas at each depth level. We used an HTC Vive head-mounted display and Noitom HI5 gloves as the experimental equipment and recruited 14 participants with normal vision. The results show that pointing accuracy with ray-casting gesture interaction increases with the depth level and with the target's proximity to the visual centre. The results also reveal a spatial pattern in ray-casting pointing deviation: participants' actual click positions fell below the target positions, and the deviation was significantly larger on the right side of the viewing field than in the left and middle regions. These findings provide a reference for the spatial distribution of targets in gesture-interaction tasks in VR environments.
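To make the reported layout concrete, the nine target areas can be generated by projecting viewing angles onto a frontal plane at each depth level. The helper below is a hypothetical reconstruction of that geometry; the 20° intervals and the 1 m / 2.5 m / 6 m depths come from the abstract, while the function and grid construction are assumptions for illustration.

```python
import math

def target_on_plane(az_deg: float, el_deg: float, depth_m: float):
    """Project a viewing direction (azimuth, elevation) onto a vertical
    plane at the given depth; returns the target's (x, y, z) in metres."""
    x = depth_m * math.tan(math.radians(az_deg))
    y = depth_m * math.tan(math.radians(el_deg))
    return (x, y, depth_m)

# Nine areas per depth: azimuth and elevation each in {-20, 0, +20} degrees.
layout = {d: [target_on_plane(az, el, d)
              for az in (-20, 0, 20) for el in (-20, 0, 20)]
          for d in (1.0, 2.5, 6.0)}
```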

  • Research Article
  • Citations: 1
  • 10.1093/cdn/nzaa059_026
The Representation of Food-Related Environments in Virtual Reality
  • May 29, 2020
  • Current Developments in Nutrition
  • James Hollis + 1 more


  • Research Article
  • Citations: 1
  • 10.3182/20140824-6-za-1003.01480
Comparison of Mental and Theoretical Evaluations of Remotely Controlled Mobile Manipulators
  • Jan 1, 2014
  • IFAC Proceedings Volumes
  • Cong D Pham + 2 more


  • Research Article
  • Citations: 133
  • 10.1016/j.juro.2017.07.081
Development and Validation of Objective Performance Metrics for Robot-Assisted Radical Prostatectomy: A Pilot Study.
  • Jul 29, 2017
  • Journal of Urology
  • Andrew J Hung + 5 more


  • Book Chapter
  • Citations: 1
  • 10.3233/faia230086
Electrifying Obstacle Avoidance: Enhancing Teleoperation of Robots with EMS-Assisted Obstacle Avoidance
  • Jun 22, 2023
  • Ambika Shahu + 4 more

We investigate how haptic feedback delivered through electrical muscle stimulation (EMS) can improve collision avoidance in a robot teleoperation scenario. Background: Collision-free robot teleoperation requires extensive situation awareness by the operator. This is difficult to achieve purely visually when obstacles can exist outside the robot's field of view, so feedback from other sensory channels can be beneficial. Method: We compare feedback modalities (auditory, haptic, and bi-modal) that notify users about incoming obstacles outside their field of view and move their arms in the direction needed to avoid the obstacle. We evaluate the different feedback modalities alongside a unimodal visual-feedback baseline in a user study (N=9) in which participants control a robotic arm in a virtual reality environment. We measure objective performance in terms of the number of collisions and errors, as well as subjective user feedback using the NASA-TLX and the short version of the User Experience Questionnaire. Findings: Unimodal EMS and bi-modal feedback outperformed the baseline and unimodal auditory feedback in hedonic user experience (p<.001). EMS outperformed the baseline in pragmatic user experience (p=.018). We did not detect significant differences in the performance metrics (collisions and errors), and we measured a strong learning effect in collision count and time. Key insights: EMS is promising for this task, although two of the nine participants reported experiencing some level of discomfort. The modality is best utilized for nudging rather than extended movement.

  • Research Article
  • Citations: 73
  • 10.1145/3355089.3356544
Modeling endpoint distribution of pointing selection tasks in virtual reality environments
  • Nov 8, 2019
  • ACM Transactions on Graphics
  • Difeng Yu + 4 more

Understanding the endpoint distribution of pointing selection tasks can reveal underlying patterns in how users tend to acquire a target, one of the most essential and pervasive tasks in interactive systems. It can further aid designers in creating new graphical user interfaces and interaction techniques that are optimized for accuracy, efficiency, and ease of use. Previous research has explored the modeling of endpoint distribution outside of virtual reality (VR) systems and has shown it to be useful in predicting selection accuracy and guiding the design of new interaction techniques. This work develops an endpoint distribution model of selection tasks for VR systems, resulting in EDModel, a novel model that can predict the endpoint distribution of pointing selection tasks in VR environments. The development of EDModel is based on two user studies that explored how factors such as target size, movement amplitude, and target depth affect the endpoint distribution. The model is built from the collected data, and its generalizability is subsequently tested in complex scenarios with more relaxed conditions. Three applications of EDModel inspired by previous research are evaluated to show the broad applicability and usefulness of the model: correcting the bias in Fitts's law, predicting selection accuracy, and enhancing pointing selection techniques. Overall, EDModel achieves high prediction accuracy and can be adapted to different types of applications in VR.
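EDModel's exact parameterisation is given in the paper, but its core idea, treating endpoints as a bivariate Gaussian and integrating that distribution over the target, can be sketched generically. The functions below are a generic stand-in rather than EDModel itself: they fit a Gaussian to observed endpoints and estimate selection accuracy by Monte-Carlo sampling, whereas EDModel predicts the distribution parameters from target size, movement amplitude, and depth.

```python
import numpy as np

def fit_endpoint_gaussian(endpoints: np.ndarray):
    """Fit a bivariate Gaussian to observed 2D selection endpoints
    (array of shape n_trials x 2); returns the mean and covariance."""
    return endpoints.mean(axis=0), np.cov(endpoints, rowvar=False)

def predict_accuracy(mu, cov, target_center, target_radius,
                     n=100_000, seed=0) -> float:
    """Monte-Carlo estimate of selection accuracy: the probability that
    a sampled endpoint lands inside a circular target."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n)
    dist = np.linalg.norm(samples - np.asarray(target_center), axis=1)
    return float((dist <= target_radius).mean())
```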

  • Abstract
  • 10.1016/j.juro.2015.02.703
PD19-04 VALIDATION OF LAPAROSCOPIC TRAINING CURRICULUM: THE BASIC LAPAROSCOPIC UROLOGIC SKILLS (BLUS) INITIATIVE
  • Mar 31, 2015
  • The Journal of Urology
  • Timothy Kowalewski + 1 more


  • Research Article
  • Citations: 13
  • 10.1016/j.euf.2022.09.017
International Expert Consensus on Metric-based Characterization of Robot-assisted Partial Nephrectomy
  • Oct 10, 2022
  • European Urology Focus
  • Rui Farinha + 7 more


  • Research Article
  • 10.1097/eja.0000000000001821
Development of objective performance metrics for ultrasound-guided internal jugular vein cannulation on behalf of the College of Anaesthesiologists of Ireland and observation of scores amongst novice and experienced operators.
  • Mar 28, 2023
  • European Journal of Anaesthesiology
  • Dorothy Breen + 6 more

Ultrasound-guided internal jugular venous (IJV) cannulation is a core technical skill for anaesthesiologists and intensivists. Our objectives were, at a modified Delphi panel meeting, to define and reach consensus on a set of objective ultrasound-guided IJV cannulation performance metrics on behalf of the College of Anaesthesiologists of Ireland (CAI), and to use these metrics to objectively score video recordings of novice and experienced anaesthesiologists. This was an observational study conducted at the CAI (March to June 2016) and at four CAI training hospitals (November 2016 to July 2019). The metric development group comprised two CAI national directors of postgraduate training (specialist anaesthesiologists), a behavioural scientist, a specialist intensivist, and a senior CAI trainee. Two blinded assessors scored video recordings of ultrasound-guided IJV cannulations by novice (n = 11) and experienced (n = 15) anaesthesiologists. The outcomes were a set of agreed CAI objective performance metrics, that is, the steps, errors, and critical errors characterising ultrasound-guided IJV cannulation, and the difference in performance scores between novice and experienced anaesthesiologists, with skill level defined as being below or above the median total error score (errors plus critical errors): low error (LoErr) and high error (HiErr), respectively. The study identified 47 steps, 18 errors and 13 critical errors across six phases. Variability was observed in the range of total error scores for both novice (1 to 3) and experienced (0 to 4.5) anaesthesiologists, yielding two further statistically distinct subgroups (LoErr and HiErr) for both novice (P = 0.011) and experienced (P < 0.001) practitioners. The LoErr-experienced group performed best with respect to steps, errors and total errors; critical errors were observed only in the experienced group. A set of valid, reliable objective performance metrics has been developed for ultrasound-guided IJV cannulation. The considerable skill variability underlines the need to develop a CAI simulation-training programme using these metrics.
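The skill-level grouping the authors describe is simple to reproduce once total error scores (errors plus critical errors) are available. The sketch below splits operators at the median score; how ties at the median are assigned is not stated in the abstract, so the convention used here (ties go to the high-error group) is an assumption.

```python
import statistics

def split_by_median_error(total_errors: dict[str, float]):
    """Split operators into low-error (LoErr) and high-error (HiErr)
    groups by the median total error score (errors + critical errors).
    Operators exactly at the median go to HiErr (assumed convention)."""
    med = statistics.median(total_errors.values())
    lo_err = {op for op, score in total_errors.items() if score < med}
    hi_err = {op for op, score in total_errors.items() if score >= med}
    return lo_err, hi_err
```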

  • Research Article
  • 10.11124/jbies-21-00383
Objective performance metrics in human robotic neuroendovascular interventions: a scoping review protocol.
  • Nov 1, 2022
  • JBI evidence synthesis
  • Peter J Gariscsak + 4 more

The objective of this scoping review is to examine the available information on objective performance metrics used during robotic neuroendovascular intervention procedures on humans. Robotic neuroendovascular intervention is defined as any endovascular procedure within the vasculature of the central nervous system performed with the assistance of a robotic system for diagnostic or therapeutic purposes. The robotic systems considered are 2-component systems consisting of a patient-side mechanical robot and a separate operator control station. Robotic neuroendovascular intervention is a growing field, and there is a need to establish objective performance metrics to further evidence-based reporting in the literature. This scoping review will consider all studies involving humans that utilize robotic neuroendovascular intervention, including all types of studies, reports, and reviews as well as gray literature. Studies will be included if they describe the use of an objective performance metric during robotic neuroendovascular intervention. The review is not limited to a particular country or health care system and will consider all study designs, regardless of their rigor or language. Using a 3-step framework as a guide, we will perform a systematic search in Embase, the Cochrane Library, and MEDLINE, considering available literature from inception to the present. Two reviewers will independently screen studies against the inclusion criteria based on title, abstract, and full text. Data will be extracted, sorted, and presented in a narrative summary as well as in tables and diagrams aligned with the objective of the scoping review.

  • Research Article
  • Citations: 4
  • 10.3389/fpsyg.2023.1129677
Evaluating gaze behaviors as pre-touch reactions for virtual agents.
  • Mar 6, 2023
  • Frontiers in Psychology
  • Dario Alfonso Cuello Mejía + 3 more

Reaction behaviors of human-looking agents to nonverbal communication cues significantly affect how the agents are perceived and how interactions unfold. Some studies have evaluated such reactions in several interaction contexts, but few have addressed before-touch situations and how the agent's reaction is perceived. In particular, prior work has not considered how pre-touch reactions affect the interaction, the role of gaze behavior in a before-touch context, or how gaze behavior conditions participants' perceptions and preferences. The present study investigated the factors that define pre-touch reactions in a humanoid avatar in a virtual reality environment and how they influence people's perceptions of the avatar. We performed two experiments to assess the differences between approaches from inside and outside the field of view (FoV) and implemented four gaze behaviors: face-looking, hand-looking, face-then-hand looking, and hand-then-face looking. We also evaluated participants' preferences based on perceived human-likeness, naturalness, and likeability. Experiment 1 evaluated the number of steps in the gaze behavior, the order of the gaze steps, and gender; Experiment 2 evaluated the number and order of the gaze steps. A two-step gaze behavior was perceived as more human and more natural for approaches from both inside and outside the field of view, and when only a one-step gaze movement was used, a face-first looking behavior was preferable to a hand-first looking behavior for approaches from inside the field of view. Regarding the approach location, our results show that a relatively complex gaze movement that includes face-looking is fundamental to improving perceptions of agents in before-touch situations. Including gaze behavior as part of a possible touch interaction helps in developing more responsive avatars and provides another communication channel for increasing immersion and enhancing the experience in virtual reality environments, extending the frontiers of haptic interaction and complementing previously studied nonverbal communication cues.
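The four gaze behaviors are essentially timed sequences of gaze targets, which can be expressed compactly. The sketch below is an illustrative encoding only; the step duration and the representation of gaze targets are assumptions, not values from the study.

```python
def gaze_schedule(behavior: str, step_s: float = 0.8):
    """Return (gaze_target, start_time_s) pairs for one of the four
    pre-touch gaze behaviors; step_s is an assumed step duration."""
    steps = {
        "face": ["face"],
        "hand": ["hand"],
        "face_then_hand": ["face", "hand"],
        "hand_then_face": ["hand", "face"],
    }[behavior]
    return [(target, i * step_s) for i, target in enumerate(steps)]

# Example: the two-step hand-then-face reaction.
print(gaze_schedule("hand_then_face"))  # [('hand', 0.0), ('face', 0.8)]
```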

  • Research Article
  • Citations: 8
  • 10.3390/ijgi11020127
Toward Gaze-Based Map Interactions: Determining the Dwell Time and Buffer Size for the Gaze-Based Selection of Map Features
  • Feb 10, 2022
  • ISPRS International Journal of Geo-Information
  • Hua Liao + 3 more

The modes of interaction (e.g., mouse and touch) between maps and users affect the effectiveness and efficiency of transmitting cartographic information. Recent advances in eye tracking technology have made eye trackers lighter, cheaper and more accurate, broadening the potential to interact with maps via gaze. In this study, we focused exclusively on using gaze to choose map features (i.e., points, polylines and polygons) via the select operation, a fundamental action preceding other operations in map interactions. We adopted an approach based on the dwell time and buffer size to address the low spatial accuracy and Midas touch problem in gaze-based interactions and to determine the most suitable dwell time and buffer size for the gaze-based selection of map features. We conducted an experiment in which 38 participants completed a series of map feature selection tasks via gaze. We compared the participants’ performance (efficiency and accuracy) between different combinations of dwell times (200 ms, 600 ms and 1000 ms) and buffer sizes (point: 1°, 1.5°, and 2°; polyline: 0.5°, 0.7° and 1°). The results confirmed that a larger buffer size raised efficiency but reduced accuracy, whereas a longer dwell time lowered efficiency but enhanced accuracy. Specifically, we found that a 600 ms dwell time was more efficient in selecting map features than 200 ms and 1000 ms but was less accurate than 1000 ms. However, 600 ms was considered to be more appropriate than 1000 ms because a longer dwell time has a higher risk of causing visual fatigue. Therefore, 600 ms supports a better balance between accuracy and efficiency. Additionally, we found that buffer sizes of 1.5° and 0.7° were more efficient and more accurate than other sizes for selecting points and polylines, respectively. Our results provide important empirical evidence for choosing the most appropriate dwell times and buffer sizes for gaze-based map interactions.
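The dwell-plus-buffer mechanism is straightforward to implement: convert the buffer from visual degrees into screen pixels for the display geometry, then select the nearest feature inside the buffer once the dwell timer elapses. In the sketch below, the viewing distance, pixel density, and feature representation are illustrative assumptions; only the dwell and buffer values echo the study's recommendations.

```python
import math

def buffer_deg_to_px(buffer_deg: float, view_dist_cm: float,
                     px_per_cm: float) -> float:
    """Convert a buffer radius in visual degrees to screen pixels
    using standard visual-angle geometry."""
    size_cm = view_dist_cm * math.tan(math.radians(buffer_deg))
    return size_cm * px_per_cm

def feature_under_gaze(gaze_xy, features, buffer_px):
    """Return the feature nearest to gaze within the buffer, or None.
    Pair with a dwell timer (the study recommends 600 ms) to avoid
    the Midas touch problem."""
    candidates = [f for f in features
                  if math.dist(gaze_xy, f["xy"]) <= buffer_px]
    if not candidates:
        return None
    return min(candidates, key=lambda f: math.dist(gaze_xy, f["xy"]))

# Example: a 1.5 deg point buffer at 60 cm viewing distance, ~38 px/cm.
buffer_px = buffer_deg_to_px(1.5, 60, 38)
```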

More from: IEEE Transactions on Visualization and Computer Graphics
  • Research Article
  • 10.1109/tvcg.2025.3628181
Untangling Rhetoric, Pathos, and Aesthetics in Data Visualization.
  • Nov 7, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Verena Prantl + 2 more

  • Research Article
  • 10.1109/tvcg.2025.3616763
Measurement of Visitor Behavioral Engagement in Heritage Informal Learning Environments Using Head-Mounted Displays.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Shuyu Luo + 4 more

  • Research Article
  • 10.1109/tvcg.2025.3616756
Selection at a Distance Through a Large Transparent Touch Screen.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Sebastian Rigling + 4 more

  • Research Article
  • 10.1109/tvcg.2025.3610275
IEEE ISMAR 2025 Introducing the Special Issue
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Han-Wei Shen + 2 more

  • Research Article
  • 10.1109/tvcg.2025.3616842
Detecting Visual Information Manipulation Attacks in Augmented Reality: A Multimodal Semantic Reasoning Approach.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Yanming Xiu + 1 more

  • Research Article
  • 10.1109/tvcg.2025.3610302
IEEE ISMAR 2025 Science &amp; Technology Program Committee Members for Journal Papers
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics

  • Research Article
  • 10.1109/tvcg.2025.3616749
HAT Swapping: Virtual Agents as Stand-Ins for Absent Human Instructors in Virtual Training.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Jingjing Zhang + 8 more

  • Research Article
  • 10.1109/tvcg.2025.3616758
Viewpoint-Tolerant Depth Perception for Shared Extended Space Experience on Wall-Sized Display.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Dooyoung Kim + 3 more

  • Research Article
  • 10.1109/tvcg.2025.3616751
SGSG: Stroke-Guided Scene Graph Generation.
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
  • Qixiang Ma + 5 more

  • Research Article
  • 10.1109/tvcg.2025.3620888
IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE International Symposium on Mixed and Augmented Reality
  • Nov 1, 2025
  • IEEE Transactions on Visualization and Computer Graphics
