Abstract
Augmented reality head-up displays (HUDs) require matching the distance of virtual objects to the real scene across an adequate field of view (FoV). At the same time, pupil-replication-based waveguide systems provide a wide FoV while keeping HUDs compact. To provide 3D imaging and enable virtual-object distance matching in such waveguide systems, we propose a time-sequential autostereoscopic imaging architecture based on synchronized multi-view picture generation and eyebox formation units. Our simulation setup, built to validate the feasibility of the system, yields an FoV of 15° × 7.5° with clear, crosstalk-free images at a resolution of 60 pix/deg for each eye. Our proof-of-concept prototype with reduced specifications yields results consistent with the simulation in terms of viewing-zone formation: viewing zones for the left and right eyes can be clearly observed in the plane of the eyebox. Finally, we discuss how the initial distance of the virtual image can be set for quantifiably fatigue-free 3D imaging, and how the FoV can be further extended in this type of waveguide system by varying the parameters of the eyebox formation unit.
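To make the time-sequential operation concrete, the following is a minimal sketch (not the authors' implementation) of how the alternating left/right sub-frames could be coordinated: the picture generation unit (PGU) and the eyebox formation unit (EFU) are assumed to switch in lockstep each frame, and the per-eye pixel budget is derived from the FoV and angular resolution reported in the abstract. The callback names (render_view, set_viewing_zone, present) are hypothetical placeholders.

```python
# Illustrative sketch of time-sequential autostereoscopic synchronization.
# Assumption: one sub-frame per eye, with the EFU steering the viewing zone
# in lockstep with the PGU rendering that eye's view.

FOV_H_DEG, FOV_V_DEG = 15.0, 7.5   # field of view from the simulation setup
RES_PIX_PER_DEG = 60               # angular resolution per eye

# Pixel budget implied by the reported FoV and angular resolution.
frame_px = (int(FOV_H_DEG * RES_PIX_PER_DEG), int(FOV_V_DEG * RES_PIX_PER_DEG))
print(f"per-eye frame: {frame_px[0]} x {frame_px[1]} px")  # 900 x 450 px

def run_time_sequential(n_frames, render_view, set_viewing_zone, present):
    """Alternate left/right sub-frames; the three callbacks are hypothetical
    stand-ins for the picture generation and eyebox formation units."""
    for i in range(n_frames):
        eye = "left" if i % 2 == 0 else "right"
        image = render_view(eye, frame_px)  # PGU: render the view for this eye
        set_viewing_zone(eye)               # EFU: steer the eyebox viewing zone
        present(image)                      # display the sub-frame
```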