Waiting for the ultimate display: can decreased fidelity positively influence perceived realism?
The first virtual reality (VR) systems have hit the shelves, and 2017 may become the year where VR finally enters the homes of consumers in a big way. By allowing users to perceive and interact in a natural manner, VR offers the promise of realistic experiences of familiar, foreign, and fantastic virtual places and events. However, should we always opt for the highest degree of fidelity when striving to provide users with realistic experiences? In this position paper, we argue that when certain components of fidelity are limited, as they will be in relation to consumer VR, then maximizing the fidelity of other components may be detrimental to the perceived realism of the user. We present three cases supporting this hypothesis, and discuss the potential implications for researchers and developers relying on commercially available VR systems.
- Research Article
2
- 10.1096/fasebj.2020.34.s1.07235
- Apr 1, 2020
- The FASEB Journal
Significant advancements in consumer-grade virtual reality (VR) systems are providing educators with new tools to interact with students. Prior to 2016, VR systems were cost-prohibitive, limited in software selection, and complex to use. The Oculus Rift, first introduced in 2013, revolutionized the VR field by bringing forth the first affordable six-degrees-of-freedom (6-DoF) commercial VR system. A functional 6-DoF VR system consists of a wearable headset, two hand controllers that allow users to interact naturally with objects in the VR space, and external sensors to track movements; it also requires a costly VR-ready computer to render the graphics. In the fall of 2017, a strike affecting all colleges in Ontario limited the ability to deliver onsite lectures for five weeks. During the strike, innovative means of lecture delivery, such as VR lectures, were successfully implemented to meet the course objectives of a first-year dissection-based human anatomy course. However, a notable limitation of VR technology at the time was its need to be tethered to a computer, which confined its use to a small, pre-defined work area. This limited accessibility meant that creating VR-based educational content required more time and effort. Recent advancements in VR technology have led to a new, all-in-one 6-DoF VR system. The Oculus Quest, released in May 2019, incorporates an on-board computer, built-in camera sensors, and an on-board battery. The combination of these features results in a powerful, affordable, and portable system that can be used anywhere, anytime, with ease. The Oculus Quest was incorporated into the Fall 2019 first-year human anatomy course. Videos reinforcing course lectures and major anatomical concepts, such as abdominal blood supply, the brachial plexus, and blood supply to the upper limb, were created and distributed digitally to students.
Additionally, new software features allow the instructor to export the 3D models created in the VR space as a 3D file, which allows students to view the 3D models on their own VR headsets or computer devices, and 3D print the structures created in the VR space. Preliminary data suggests that students had a better understanding of anatomical concepts, relationships, and depth when using the supplementary VR resources in conjunction with traditional resources. Moreover, VR video content attained a higher viewer retention rate and viewer satisfaction when compared to 2D videos.
- Research Article
14
- 10.3991/ijoe.v9i5.2705
- Sep 15, 2013
- International Journal of Online and Biomedical Engineering (iJOE)
Virtual reality (VR) systems have the potential to alleviate existing constraints on various natural and social resources. Currently, real-time applications of VR systems are hampered by the tediousness of creating virtual environments. Furthermore, today's VR systems only stimulate the human senses of vision, hearing, and, to some extent, touch, which prevents users from feeling fully immersed in the virtual environment. By integrating real physical devices with virtual environments, user interactions with such systems can be improved, and advanced technologies such as the MS Kinect system can be used to augment the environments themselves. While existing development platforms for VR systems are expensive, game engines provide a more efficient method for integrating VR with physical devices.
 In this paper, an efficient approach for integrating virtual environments and physical devices is presented. This approach employs modifications of games that are based on commercially available game engines for implementing the virtual environments in conjunction with the application of Dynamic Link Libraries (DLLs) for realizing versatile communications between these virtual environments and various application platforms, which in turn can interact with the physical devices outside of the virtual environments. This paper is divided into four sections. In the first section, the motivation for the developments described here is discussed, followed by a description of the method used to integrate virtual environments with physical devices in the second section. In the third section, an interactive and collaborative laboratory environment based on a multi-player computer game engine that is linked to physical experimental setups is presented as an example of a VR system. In the final section, some additional promising applications of the developed platform and the corresponding challenges are briefly introduced.
- Research Article
7
- 10.3390/info11020064
- Jan 26, 2020
- Information
This research was performed to improve the efficiency of a user's access to information and the interactive experience of task selection in a virtual reality (VR) system, to reduce the user's cognitive load, and to improve the efficiency of designers in building a VR system. On the basis of user behavior cognition-system resource mapping, a task scenario resource optimization method for VR systems based on quality function deployment-convolutional neural network (QFD-CNN) is proposed. Firstly, under the guidance of user behavior cognition, the characteristics of multi-channel information resources in a VR system are analyzed, and the correlation matrix of VR system scenario resource characteristics is constructed based on the design criteria of human–computer interaction, cognition, and low-load demand. Secondly, analytic hierarchy process (AHP)-QFD combined with an evaluation matrix is used to output the priority ranking of VR system resource characteristics. Then, a VR system task scenario cognitive load experiment is carried out on users, and CNN input-set and output-set data are collected through the experiment in order to build a CNN model and predict user cognitive load and satisfaction during human–computer interaction in the VR system. Finally, combined with the task information interface of a VR system in a smart city, applied research on the system resource feature optimization method under multi-channel cognition is carried out. The results show that the consistency ratio (CR) of the cognitive-load-based AHP-QFD model is less than 0.1, and the MSE of the CNN prediction network is 0.004247, which demonstrates the effectiveness of the model.
For the same design task in a VR system, a scheme produced by the traditional design process was compared with a scheme optimized by the proposed method. The results show that users experienced lower cognitive load and a better task operation experience when interacting with the latter, so the optimization method studied here can serve as a reference for constructing virtual reality systems.
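The AHP consistency check mentioned above (CR below 0.1) is a standard computation. The following is a minimal sketch, not the authors' code, using Saaty's usual definitions: the priority vector from normalized row geometric means, λ_max estimated from the weighted sums, and CR = CI / RI against Saaty's random-index table.

```python
from math import prod

def ahp_consistency_ratio(matrix):
    """Approximate Saaty's consistency ratio CR = CI / RI for a
    pairwise comparison matrix; values below 0.1 are usually accepted."""
    n = len(matrix)
    # Priority vector from normalized row geometric means.
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    w = [g / total for g in geo]
    # lambda_max estimated as the mean of (A w)_i / w_i.
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return ci / ri
```

A perfectly consistent 3×3 matrix (each entry a_ij = w_i / w_j) yields CR = 0; inconsistent judgments push CR upward.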
- Research Article
5
- 10.1155/2022/1270565
- Feb 27, 2022
- Mathematical Problems in Engineering
Job-related vision standards have become an increasing concern in recent years. Mobile visual acuity measurements enable early detection and diagnosis of visual impairments and are being used around the world. However, the reliability of mobile visual acuity testing has not yet been fully demonstrated. A simple virtual reality (VR) system combining a mobile phone and a VR cardboard device has potential as a reliable visual acuity evaluation system due to its fully controlled environment. Visual acuity measurements taken via this type of VR system were evaluated by comparing them with those obtained using the traditional Snellen chart. This study gathered data according to different parameters, including right or left eye, with or without corrective vision devices, and the learning effects of the system. The results showed that the VR system had an accuracy of up to 96.43% and 92.86% for the left and right eyes, respectively, for participants not using corrective devices. In the same group, the proposed system provided significant Spearman's r correlations for the left and right eyes (0.7342 and 0.8188, respectively) against the traditional approach. Therefore, despite some limitations, a mobile VR system has potential as a self-diagnostic tool for rapid, low-cost visual acuity measurements in a fully controlled environment as well as for providing historical vision data and tracing for the early detection of visual impairments or conditions.
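Spearman's r, the agreement statistic this study reports, is the Pearson correlation of the two rank vectors, with ties sharing their average rank. A minimal self-contained sketch (the acuity values in the test are invented, not the study's data):

```python
def ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_r(x, y):
    """Spearman's r: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfectly monotone agreement between the VR and Snellen scores would give r = 1; the reported values of 0.73 and 0.82 indicate strong but imperfect agreement.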
- Conference Article
- 10.1117/12.2644156
- Jan 4, 2023
The causes of the vergence-accommodation conflict of human vision in virtual and mixed reality systems were studied, and technical and algorithmic approaches to reduce and eliminate the conflict in virtual reality systems are considered. As a technical solution, an approach was chosen that adaptively focuses the eyepiece of the virtual reality system on the convergence point of the user's eyes, as determined by a pupil-tracking system. Possible algorithmic solutions are considered that focus the virtual reality image in accordance with the expected accommodation of the human eyes. The main solutions are the classical one, in which the image is filtered in accordance with the defocusing caused by natural accommodation at a given distance, and one in which the corresponding filtering is performed using neural network technologies. As a criterion of correctness, we used a visual comparison of the image defocusing results with the solution obtained by physically correct rendering using a human eye model. Physically correct rendering was based on bidirectional stochastic ray tracing with backward photon maps. The paper presents an analysis of the advantages and disadvantages of the proposed solutions.
- Book Chapter
1
- 10.1007/978-3-030-49695-1_35
- Jan 1, 2020
Virtual Reality (VR) and Augmented Reality (AR) can be defined by the amount of virtual elements displayed to a human's senses: VR is completely synthetic and AR is partially synthetic. This paper compares VR and AR systems for variations of three ball-sorting task scenarios, and evaluates both user performance and reaction (i.e., simulator sickness and immersion). The VR system scored significantly higher than the AR system in effectiveness for each scenario and in completion rate across all scenarios. The VR system also scored significantly lower than the AR system in percentage error and total false positives. The VR group significantly outperformed the AR group in efficiency: less time spent in each scenario, lower total time duration, and higher overall relative efficiency. Although post-scenario simulator sickness did not differ significantly between VR and AR, the VR condition showed an increase in disorientation from pre- to post-scenario. Significant correlations between performance effectiveness and post-scenario simulator sickness were not found. Finally, the AR system scored significantly higher on the immersion measure item for the level of challenge the scenarios provided. AR interface issues are discussed as a potential factor in the performance decrement, and AR interface solutions are given. AR may be preferred over VR if disorientation is a concern. Study limitations include causality ambiguity and limited experimental control. Next steps include testing VR and AR systems exclusively, and testing whether the increased challenge from AR immersion is beneficial to educational applications.
- Conference Article
4
- 10.1109/icfsp48124.2019.8938052
- Sep 1, 2019
Head motion classification for virtual reality (VR) systems is still an open problem without a leading pattern recognition solution. In contrast to the typical motion capture pattern recognition problem, in this case only a single inertial measurement unit (IMU) sensor is used. The head motions we want to recognize in VR systems may be natural head motions such as nodding or shaking the head (which might be used while interacting with VR avatars), as well as elements of a head-based navigation system or interface. The second type of action is more challenging because it may involve motion trajectories that do not appear in real life, yet must be executable using only the head. In this paper we propose a trajectory-based motion feature description that is used by a dynamic time warping (DTW) classifier. Training the classifier requires a modified DTW barycenter averaging (DBA) heuristic algorithm that uses quaternions to represent rotations. The proposed pattern recognition system, together with its evaluation on a set of head motions acquired by a VR system, is our original contribution. We evaluated our method on a dataset consisting of 8 types of motions performed by two persons (160 motion samples in total). In leave-one-out evaluation we obtained very good results: only 10% of one action and 15% of another were incorrectly classified, while the remaining 6 action classes were classified 100% correctly. Both the dataset and the implementation of the proposed method can be downloaded, so our experiment can be reproduced.
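The core of such a classifier can be sketched as DTW over quaternion trajectories with nearest-template labeling. This is an illustrative reconstruction, not the paper's implementation: the paper trains its templates with a quaternion DBA procedure, whereas here the templates and the 1-NN decision rule are assumed for brevity.

```python
from math import acos

def quat_dist(q1, q2):
    """Angular distance between unit quaternions; the absolute value of
    the dot product handles the q / -q double cover."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return acos(min(1.0, dot))

def dtw(seq_a, seq_b):
    """Classic dynamic time warping cost between two quaternion trajectories."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = quat_dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(trajectory, templates):
    """Label the trajectory by its nearest class template under DTW cost."""
    return min(templates, key=lambda label: dtw(trajectory, templates[label]))
```

DTW makes the comparison robust to motions performed at different speeds, which is the reason it is a natural fit for head gestures of varying tempo.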
- Conference Article
1
- 10.1109/bmsb49480.2020.9519726
- Oct 27, 2020
Virtual Reality (VR) systems are currently limited in either processing power, portability or functionality. 5G networking, with super high data rates and ultra-low latency, is expected to revolutionise much of what we do, notably transforming VR experiences. The Internet of Radio Light (IoRL) project presents a 5G architecture that could further enhance VR experiences by bridging gaps between various VR technologies and reducing current restrictions. This could enable a single IoRL VR system, capable of combining the significant processing performance of PC operated VR systems with similar physical freedoms offered by standalone VR headsets, as well as delivering equally impressive VR experiences to mobile users. Most notably, the IoRL project combines both Visible Light Communication (VLC) and mmWave technology to produce an Indoor Positioning System (IPS) which, as presented in earlier works, poses an opportunity for a novel VR tracking method. This paper explores the possibilities of an IoRL VR system and proposes a model and solution to evaluate the concept validity. The obtained results reflect that while this system is effective for 5G wireless localisation, further work is required to meet VR requirements.
- Research Article
33
- 10.1162/105474602760204291
- Aug 1, 2002
- Presence: Teleoperators and Virtual Environments
The development and maintenance of a virtual reality (VR) system requires in-depth knowledge and understanding in many different disciplines. Three major features distinguish VR systems: real-time performance while maintaining acceptable realism and presence; objects with two clearly distinct yet inter-related aspects, such as geometry/structure and function/behavior; and the still-experimental nature of multi-modal interaction design. Until now, little attention has been paid to methods and tools for the structured development of VR software that addresses these features. Many VR application development projects proceed by modeling the needed objects on conventional CAD systems, then programming the system using simulation packages. Usually, these activities are carried out without much planning, which may be acceptable only for small-scale or noncritical demonstration systems. However, for VR to be taken seriously as a media technology, a structured approach to developing VR applications is required for the construction of large-scale VR worlds, and this will undoubtedly involve complex resource management, abstractions for basic system/object functionalities and interaction tasks, and integration and easy plug-in of different input and output methods. In this paper, we assemble a comprehensive structured methodology for building VR systems, called CLEVR (Concurrent and LEvel by Level Development of VR System), which combines several conventional and new concepts. For instance, we employ the simultaneous consideration of form, function, and behavior; hierarchical modeling and top-down creation of LODs (levels of detail); incremental execution and performance tuning; user task and interaction modeling; and compositional reuse of VR objects.
The basic underlying modeling approach is to design VR objects (and the scenes they compose) hierarchically and incrementally, considering their realism, presence, behavioral correctness, performance, and even usability in a spiral manner. To support this modeling strategy, we developed a collection of computer-aided tools called P-VoT (POSTECH-Virtual reality system development Tool). We demonstrate our approach by illustrating a step-by-step design of a virtual ship simulator using CLEVR/P-VoT, and demonstrate the effectiveness of our method in terms of the quality (performance and correctness) of the resulting software and the reduced effort in its development and maintenance.
- Research Article
51
- 10.1109/rbme.2017.2749527
- Jan 1, 2017
- IEEE reviews in biomedical engineering
Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones. A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems solely developed or evaluated for training purposes. In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. Main challenges remaining for future research include the consideration of organ movement and deformation, human factor issues, and the conduction of large clinical studies. Augmented and virtual reality systems have the potential to improve safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to cope with current limitations of virtual and augmented reality assistance in clinical environments.
- Research Article
- 10.1007/s10055-025-01309-8
- Jan 25, 2026
- Virtual Reality
The sophisticated and rapidly evolving nature of cyber threats, along with their substantial social and financial impact, requires business and government organizations to adopt proactive measures to sustain robust cybersecurity defenses. Employees act as the first line of defense against cyber-attacks, so ensuring their cybersecurity awareness is of vital importance. Conventional training techniques, comprising textual content and passive lectures, fall short in providing sustained user engagement, behavioral adaptation, long-lasting knowledge retention, and other key outcomes required to equip humans with effective cyber defense skills. There is therefore a critical need for innovative educational approaches that address these issues. This study presents a novel cybersecurity educational framework designed to empower users to identify and thwart cyberattacks through immersive Virtual Reality (VR) training systems. It proposes and evaluates two distinct VR learning environments: a desktop VR setup employing Leap Motion hand-gesture-based interaction, and an immersive VR (IVR) system utilizing head-mounted displays and hand controllers, both intended to enhance cybersecurity learning, engagement, and knowledge retention. These VR systems are compared with traditional textbook- and video-tutorial-based learning methods on learning effectiveness, engagement, and retention, and evaluated for usability and spatial presence. Experiments demonstrate significant advantages for both VR systems over video and textbook learning in comprehension of cybersecurity concepts, user engagement, and ease of use. In particular, the IVR system performed best in knowledge retention, followed by desktop VR, the video tutorial, and textbook learning. Usability assessments indicate excellent user satisfaction for both VR setups, while spatial presence was significantly stronger in the IVR system.
These findings affirm that the proposed VR systems offer an effective and engaging solution for cybersecurity training in educational and, potentially, organizational settings. In particular, desktop VR offers a more cost-effective and accessible alternative, suggesting a pathway toward greater scalability.
- Research Article
5
- 10.1177/1071181319631080
- Nov 1, 2019
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Virtual reality (VR) is receiving enough attention to be regarded as undergoing a revival, and technologies related to the implementation of VR systems continue to evolve. VR systems are applied not only in entertainment but also in various fields such as medicine, rehabilitation, education, engineering, and the military (Aïm, Lonjon, Hannouche, & Nizard, 2016; Howard, 2017; Lele, 2013). In particular, low-cost and immersive VR systems are being commercialized for the general public, accelerating the revival of VR (Wang & Lindeman, 2015). In VR systems, research from the viewpoint of human–computer interaction and user experience (UX) is required to provide a high sense of immersion to the user. Therefore, the purpose of this study is to provide a structured methodology for classifying current VR research and to systematically review UX evaluation of VR systems, in order to identify research trends and clarify future research directions. This study followed the PRISMA systematic review protocol (Liberati et al., 2009). To cover a broad spectrum of perspectives from the engineering and medical fields, six web databases were selected: Scopus, Web of Science, ScienceDirect, IEEE Xplore, EBSCO, and ProQuest. The main search keywords were virtual reality and user experience, including acronyms and synonyms: 'virtual reality', 'virtual environment', 'VR', and 'VE' were chosen as keywords for virtual reality, and 'user experience', 'UX', and 'human experience' for user experience. In addition, only journal articles in English were searched. After the screening process was completed, final articles were selected based on the full text. In this process, there were two essential selection conditions: the selected articles should use a VR system and should evaluate a UX component. No restrictions other than these conditions were applied.
As a result, 78 articles were found to be consistent with the purpose of this study, and two main points of discussion about UX studies in VR systems emerged. The first relates to the implementation of equipment and technology, including input devices, output devices, feedback forms, platforms, and applications. The other relates to research methods, including user characteristics, interactions, and evaluation methods. With respect to hand input devices, conventional input devices such as keyboards and game pads were used in many cases compared to trackable devices. However, as implementation techniques for natural interaction, such as gesture recognition or real-time tracking of body parts, have been extensively developed, UX research needs to be conducted on VR systems that apply these techniques. In relation to feedback, stimuli other than visual stimuli were not frequently provided. Since providing multiple types of stimuli simultaneously may increase the user's immersion and sense of reality, the effect of multi-sensory feedback needs to be studied intensively in the future. In addition, there is a lack of academic research on CAVEs and motion platforms. Though CAVEs and motion platforms are difficult to set up for experimentation, because they are expensive to build and require large spaces, there is a need to continually expand UX research on these platforms since the public will have more opportunities to access them. Regarding research methods, most studies have focused on subjective measurements, quantitative research, laboratory experiments, and episodic UX. To comprehensively understand overall UX, it is necessary to conduct qualitative studies, such as observing subjects experiencing a VR system, think-aloud protocols, or in-depth interviews, rather than evaluating UX only through questionnaires. In addition, there was no case in which UX was evaluated in terms of momentary UX.
However, there is a limit to evaluating subjective measures such as immersion, presence, and motion sickness by directly asking the user during usage, since the VR system provides an immersive environment. Thus, behavioral characteristics or physiological signals of users can serve as evaluation indicators for these measures. Today, new VR systems are emerging and VR-related technologies are expected to evolve steadily. In this context, these findings can contribute to future research directions and provide insights into conducting UX evaluation in VR systems.
- Research Article
1
- 10.20870/ijvr.1997.3.1.2617
- Jan 1, 1997
- International Journal of Virtual Reality
Cartesian control algorithms are presented for six-degree-of-freedom (6-DOF) force-reflecting hand-controllers (FRHCs) used for simultaneous operator position/orientation (or rate) commands to a virtual reality (VR) system and virtual force/moment kinesthetic reflection to the operator. The commands and kinesthetic feedback are transferred in Cartesian space. The task force/moment (wrench) dominates, while features are provided to reduce operator loading: virtual payload and FRHC gravity compensation, input channels to easily separate 6-DOF inputs with one hand, constant-force return-to-center, and FRHC damping to improve relative stability. In the experimental implementation, the "VR system" was a real, remotely located teleoperated robotic system with real sensed task wrenches. Experimental results show that the algorithms are effective in reducing contact wrenches and increasing telepresence quality in practical tasks. The methods in this paper are suitable for kinesthetic haptic display in virtual environments.
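The operator-loading reductions listed above can be illustrated in one dimension. The following is a hypothetical 1-DOF sketch, not the paper's 6-DOF Cartesian algorithm: the reflected task force dominates, while a gravity-compensation offset, a constant-magnitude return-to-center force, and velocity damping are superimposed (all gains are invented).

```python
def frhc_output_force(task_force, pos, vel,
                      grav_comp=0.0, center_gain=1.0, damping=0.5):
    """Commanded FRHC force along one axis: the reflected task wrench
    dominates; gravity compensation, a constant-force return-to-center
    term, and velocity damping reduce operator loading."""
    # Return-to-center: constant magnitude, always directed toward pos = 0.
    center = -center_gain if pos > 0 else (center_gain if pos < 0 else 0.0)
    return task_force + grav_comp + center - damping * vel
```

In the full 6-DOF case each term becomes a wrench (force/moment pair) expressed in Cartesian space, but the superposition structure is the same.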
- Conference Article
- 10.1117/12.2607523
- Mar 4, 2022
Virtual reality (VR) systems bring fantastic immersive experiences to users in multiple fields. However, the performance of VR displays is still troubled by several factors, including inadequate resolution, noticeable chromatic aberration, and low optical efficiency. The Pancharatnam-Berry phase optical element (PBOE) exhibits several advantages, such as high efficiency, a simple fabrication process, compactness, and light weight, making it an excellent candidate for VR systems. We have demonstrated that by using three kinds of PBOEs, the above-mentioned problems can be solved satisfactorily. The first PBOE is the PB grating/deflector (PBD), which deflects left-handed and right-handed circularly polarized beams in two opposite directions. Therefore, if we insert a PBD into the VR system and carefully design the deflection angle, it can optically separate each display pixel into two virtual pixels and superimpose them to obtain a higher pixel density. In this way, the pixels per inch (PPI) of the original display can be doubled. The second PBOE is the PB lens (PBL). As a diffractive optical lens, it has chromatic dispersion opposite to that of a refractive lens. When a PBL with an appropriate focal length is hybridized with a refractive Fresnel lens, the system's chromatic aberration can be significantly reduced. The third PBOE is the multi-domain PB lens, in which the effective focal length of each domain can be customized independently. This multi-domain PBL can function as a diffractive deflection film in the VR system. If such a diffractive deflection film is combined with a directional backlight, étendue waste can be markedly reduced, and more than double the optical efficiency can be achieved in both Fresnel and "Pancake" VR systems. These ultrathin PBOEs will find promising applications in future VR systems.
- Research Article
2
- 10.1016/j.dib.2025.111827
- Aug 1, 2025
- Data in brief
Multimodal cross-system virtual reality (VR) ball throwing dataset for VR biometrics.