Abstract

Head-mounted displays (HMDs) are becoming increasingly popular as a crucial component of virtual reality (VR). However, their constrained form factor forces contemporary HMDs to adopt a simple optical structure, which precludes the multiple lens elements that would normally reduce aberrations. As a result, they introduce severe aberrations and imperfections into the optical imagery, causing visual fatigue and degrading the immersive experience of presence in VR. To address this issue without modifying the hardware, we present what is, to the best of our knowledge, a novel software-driven approach that compensates for HMD aberrations in real time. Our approach pre-corrects the input image by deconvolution, minimizing the difference between its after-lens image and the ideal image. We characterize the specific wavefront aberration and point spread function (PSF) of the optical system using Zernike polynomials. For higher computational efficiency, we improve conventional deconvolution based on a hyper-Laplacian prior by adopting a regularization constraint term based on L2 optimization and the input-image gradient. Furthermore, we implement our solution entirely on a graphics processing unit (GPU) to ensure constant, scalable real-time performance for interactive VR. Our experiments demonstrate that the solution reliably reduces the aberrations of after-lens images in real time.
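The two core steps of the abstract can be illustrated with a minimal NumPy sketch: a PSF derived from a Zernike wavefront term via Fourier optics, and a closed-form, Fourier-domain deconvolution with an L2 penalty on the image gradient. This is an illustrative assumption-laden toy, not the authors' GPU implementation: the single defocus term `Z_2^0`, the wavelength, and the regularization weight `lam` are hypothetical choices, and the PSF is assumed to be the same size as the image with its origin at index (0, 0).

```python
import numpy as np

def defocus_psf(n=64, coeff=0.5, wavelength=0.55):
    """Toy PSF of a circular pupil with one Zernike term (defocus, Z_2^0).

    coeff and wavelength are in micrometres; both values are illustrative.
    """
    x = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.sqrt(xx**2 + yy**2)
    pupil = (rho <= 1.0).astype(float)
    # Zernike defocus: sqrt(3) * (2*rho^2 - 1), scaled by the coefficient
    w = coeff * np.sqrt(3) * (2 * rho**2 - 1)
    # generalized pupil function; PSF is the squared magnitude of its FFT
    gpf = pupil * np.exp(1j * 2 * np.pi * w / wavelength)
    psf = np.abs(np.fft.fft2(gpf))**2
    return psf / psf.sum()  # normalize to unit energy

def precorrect_l2(image, psf, lam=1e-2):
    """Pre-correct an image by L2-regularized deconvolution (a sketch, not the
    paper's exact solver).

    Solves  argmin_x ||k * x - y||^2 + lam * ||grad x||^2  in closed form:
        X = conj(K) Y / (|K|^2 + lam (|Dh|^2 + |Dv|^2))
    """
    h, w = image.shape
    Y = np.fft.fft2(image)
    K = np.fft.fft2(psf, s=(h, w))  # PSF origin assumed at index (0, 0)
    # frequency responses of horizontal/vertical forward-difference filters
    dh = np.zeros((h, w)); dh[0, 0] = -1.0; dh[0, 1] = 1.0
    dv = np.zeros((h, w)); dv[0, 0] = -1.0; dv[1, 0] = 1.0
    Dh, Dv = np.fft.fft2(dh), np.fft.fft2(dv)
    X = np.conj(K) * Y / (np.abs(K)**2 + lam * (np.abs(Dh)**2 + np.abs(Dv)**2))
    return np.real(np.fft.ifft2(X))
```

Because the gradient filters vanish only at the DC frequency, where |K| equals the PSF's unit energy, the denominator never reaches zero for `lam > 0`; this is what makes the closed-form division stable and cheap enough to map onto a GPU FFT pipeline.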
