Abstract

We present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle. By employing a diffractive optical element (a distorted grating), the ATOM display projects different portions of a two-dimensional display screen to different depths, rendering a real three-dimensional image with correct focus cues. Thanks to its all-passive optical mapping architecture, the ATOM display features a reduced form factor and low power consumption. Moreover, the system can readily switch between a real three-dimensional display mode and a high-resolution two-dimensional display mode, providing a task-tailored viewing experience for a variety of VR/AR applications.
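
For intuition, the sketch below illustrates the general idea of mapping sub-panels of a single flat screen to different perceived depths with a simple thin-lens eyepiece model. The sub-panel layout, eyepiece focal length, and object distances are assumed values chosen for illustration only; they are not parameters of the ATOM prototype.

```python
# Minimal sketch (assumed numbers, not from the paper): sub-panels of one 2D
# screen placed at slightly different effective object distances behind an
# eyepiece form virtual images at different depths.

def virtual_image_depth(object_dist_m: float, focal_length_m: float) -> float:
    """Thin-lens relation 1/f = 1/u + 1/v, solved for v.
    For an object inside the focal length (u < f), v is negative,
    i.e., a virtual image at distance |v| on the object side."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_dist_m)

EYEPIECE_FOCAL_LENGTH = 0.05  # 50 mm eyepiece (assumed)

# Hypothetical sub-panels, each relayed to a different optical path length
# by the passive mapping element.
subpanel_object_distances = {
    "subpanel_A": 0.0490,  # closest to the focal plane -> farthest virtual image
    "subpanel_B": 0.0465,
    "subpanel_C": 0.0440,  # deepest inside focus -> nearest virtual image
}

for name, u in subpanel_object_distances.items():
    v = virtual_image_depth(u, EYEPIECE_FOCAL_LENGTH)
    print(f"{name}: object at {u * 100:.2f} cm -> virtual image at {abs(v):.2f} m")
```

With these assumed distances, the three sub-panels appear at roughly 2.4 m, 0.66 m, and 0.37 m from the eye, i.e., a single 2D screen is perceived as content distributed over several depth planes.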

Highlights

  • We present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle

  • Very few virtual reality (VR)/augmented reality (AR) devices are crafted to comply with the “human-centric design” principle[1], a rising consensus that hardware design should center around human perception[2]

  • Temporal-multiplexing-based methods are limited by the fact that the product of the image dynamic range, the number of depth planes, and the volumetric display rate cannot exceed the maximum binary pattern rate of the digital micromirror device (DMD)

Summary

Introduction

Very few VR/AR devices are crafted to comply with the “human-centric design” principle[1], a rising consensus that hardware design should center around human perception[2]. To meet this standard, a near-eye display must integrate displays, sensors, and processors in a compact enclosure while allowing for user-friendly human-computer interaction. In most current devices, however, the viewer is forced to adapt to conflicting cues between the display and the real world, causing discomfort and fatigue. This problem originates from the mismatch between the fixed depth of the display screen (i.e., the accommodation distance) and the depths of the depicted scenes (i.e., the vergence distance), which deprives the viewer of correct focus cues. Existing multi-depth approaches address this mismatch, but each has drawbacks. Temporal-multiplexing-based methods are limited by the fact that the product of the image dynamic range, the number of depth planes, and the volumetric display rate cannot exceed the maximum binary pattern rate of the digital micromirror device (DMD). The approach based on a liquid-crystal-on-silicon spatial light modulator (LCOS-SLM) relies on an active optical component to execute its core function, unfavorably increasing power consumption and the device’s form factor[46]. To address these limitations, we present an all-passive, transformable optical mapping (ATOM) near-eye display based on the “human-centric design” principle.
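
To see why the temporal-multiplexing budget is restrictive, the sketch below evaluates the constraint with assumed numbers that are merely typical of commercial DMDs, not figures from the paper. It treats each gray level as requiring one binary pattern; real bit-plane modulation schemes trade these quantities differently, but the product constraint has the same form.

```python
# Back-of-the-envelope sketch (assumed numbers) of the temporal-multiplexing
# budget for a DMD-based multifocal display:
# (gray levels) x (depth planes) x (volumetric frame rate) <= max binary pattern rate.

MAX_BINARY_PATTERN_RATE_HZ = 22_727   # assumed high-end DMD binary pattern rate
GRAY_LEVELS = 255                     # assumed 8-bit dynamic range, one binary pattern per level
VOLUMETRIC_RATE_HZ = 60               # assumed target volume refresh rate

max_depth_planes = MAX_BINARY_PATTERN_RATE_HZ // (GRAY_LEVELS * VOLUMETRIC_RATE_HZ)
print(f"At {VOLUMETRIC_RATE_HZ} Hz and {GRAY_LEVELS} gray levels, "
      f"at most {max_depth_planes} depth plane(s) fit within the binary-rate budget.")
```

Under these assumptions the budget allows only a single full-dynamic-range depth plane at 60 Hz; adding depth planes forces a reduction in dynamic range or refresh rate, which is the trade-off the ATOM display's passive architecture avoids.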

