Abstract

This paper reviews and analyzes the various implementation architectures of diffractive waveguide combiners for augmented reality (AR) and mixed reality (MR) headsets and smart glasses. Extended reality (XR) is another acronym frequently used to refer to all variants across the MR spectrum. Such devices have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained. Market analysts already show very optimistic expectations on return on investment in MR, for both enterprise and consumer applications. Hardware architectures and technologies for AR and MR have made tremendous progress over the past five years, fueled by recent investment in start-ups and accelerated mergers and acquisitions by larger corporations. To meet such high market expectations, several challenges must be addressed: first, cementing primary use cases for each specific market segment and, second, achieving greater MR performance out of increasingly size-, weight-, cost-, and power-constrained hardware. One crucial component is the optical combiner. Combiners are often considered critical optical elements in MR headsets, as they are the user's direct window to both the digital content and the real world.

Two main pillars defining the MR experience are comfort and immersion. Comfort comes in various forms:

  • wearable comfort: reducing weight and size, pushing back the center of gravity, addressing thermal issues, and so on
  • visual comfort: providing accurate and natural 3-dimensional cues over a large field of view and at high angular resolution
  • vestibular comfort: providing stable and realistic virtual overlays that spatially agree with the user's motion
  • social comfort: allowing for true eye contact, in a socially acceptable form factor

Immersion can be defined as the multisensory perceptual experience (including audio, display, gestures, and haptics) that conveys to the user a sense of realism and envelopment. To effectively address both comfort and immersion challenges through improved hardware architectures and software developments, a deep understanding of the specific features and limitations of the human visual perception system is required. We emphasize the need for a human-centric optical design process, which would allow for the most comfortable headset design (wearable, visual, vestibular, and social comfort) without compromising the user's sense of immersion (display, sensing, and interaction). Matching the specifics of the display architecture to the human visual perception system is key to bounding the hardware constraints, allowing headset development and mass production at reasonable cost while providing a delightful experience to the end user.

Highlights

  • Defense has been historically the first application sector for augmented reality (AR) and virtual reality (VR), as far back as the 1950s [1]

  • Due to the lack of available consumer display technologies and related sensors, novel optical display concepts were introduced throughout the 1990s [3, 4] that are still considered state of the art, such as the "Private Eye" smart glasses from Reflection Technology (1989) and the "Virtual Boy" from Nintendo (1995), both based on scanning displays rather than flat-panel displays

  • Due to the strong spectral spread of the in-coupler elements, the individual color fields are coupled at higher angles as the wavelength increases, which reduces the overall RGB FOV overlap that can propagate in the guide within the TIR condition (illustrated in the sketch below this list)
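
To make the spectral-spread effect concrete, the short Python sketch below applies the first-order grating equation at normal incidence, sin(theta_d) = m*lambda/(n*pitch), and compares the resulting in-guide angle for each color against the substrate's TIR critical angle. The refractive index, grating pitch, and RGB wavelengths used here are illustrative assumptions, not values taken from the paper; they simply show how longer wavelengths couple at steeper angles and shrink the angular range shared by all three colors.

    # Minimal sketch (assumed values): first-order grating equation at normal
    # incidence, checked against the TIR critical angle of the waveguide.
    import math

    n_glass = 1.8        # assumed high-index waveguide substrate
    pitch_nm = 380.0     # assumed in-coupler grating pitch
    order = 1            # first diffraction order

    theta_c = math.degrees(math.asin(1.0 / n_glass))  # TIR critical angle

    for name, lam_nm in [("blue", 460.0), ("green", 530.0), ("red", 640.0)]:
        s = order * lam_nm / (n_glass * pitch_nm)     # sin of in-guide angle
        if s >= 1.0:
            print(f"{name:5s}: evanescent (no propagating order)")
            continue
        theta_d = math.degrees(math.asin(s))
        tir = "TIR ok" if theta_d > theta_c else "leaks out (below critical angle)"
        print(f"{name:5s}: {theta_d:5.1f} deg in guide ({tir}, critical = {theta_c:.1f} deg)")

With these assumed numbers, blue propagates close to the critical angle while red approaches grazing incidence, leaving only a narrow band of angles that all three colors can share, which is the RGB FOV-overlap limitation described in the highlight above.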


Summary

Introduction

Defense has historically been the first application sector for augmented reality (AR) and virtual reality (VR), as far back as the 1950s [1]. Head-mounted display (HMD) architectures are slowly evolving toward more specific technologies, which may be a better fit for immersive requirements than flat panels are, sometimes resembling the display technologies invented throughout the first AR/VR boom two decades earlier (inorganic micro-iLED panels, 1-dimensional [1D] scanned arrays, 2-dimensional [2D] laser/vertical-cavity surface-emitting laser [VCSEL] microelectromechanical system [MEMS] scanners, and so on). Such traditional display technologies will serve as an initial catalyst for what is coming next. Different viewers of the same physical display see different information, tuned to their specific interest, depending on their physical location.

The emergence of MR as the next computing platform
Display immersion
Functional optical building blocks of an MR headset
Display engine optical architectures
Combiner optics and exit pupil expansion
Waveguide combiners
Curved waveguide combiners and a single exit pupil
Continuum from flat to curved waveguides and extractor mirrors
One-dimensional EB expansion
Two-dimensional EB expansion
Choosing the right waveguide coupler technology
Achromatic coupler technologies
Summary of waveguide coupler technologies
Design and modeling of optical waveguide combiners
Increasing FOV by using illumination spectrum
Increasing FOV by optimizing grating coupler parameters
Using dynamic couplers to increase waveguide combiner functionality
Choosing the waveguide coupler layout architecture
Building a uniform EB
Spectral spread compensation in diffractive waveguide combiners
Field spread in waveguide combiners
Focus spread in waveguide combiners
Propagating full color images in the waveguide combiner over a maximum FOV
Waveguide-coupler lateral geometries
Reducing the number of plates for RGB display while maintaining FOV reach
Conclusion
