Abstract
This article discusses the depth range that automultiscopic 3D (A3D) displays should reproduce to ensure adequate perceptual quality for substantially deep scenes. Such displays generally need depth reconstruction capabilities covering the full scene depth, but the inherent hardware restrictions of these displays make this difficult, particularly when showing deep scenes. Previous studies addressed this limitation with depth compression, which contracts the scene depth into a smaller depth range by modifying the scene geometry, assuming the scenes were represented as CG data. Those results showed that reconstructing a physical depth of only 1 m is enough to show scenes with much deeper depth without large perceptual quality degradation. However, reconstructing even a depth of 1 m remains challenging for actual A3D displays. In this study, focusing on a personal viewing situation, we introduce a dynamic depth compression that combines viewpoint tracking with the previous approach and examine the extent to which scene depths can be compressed while preserving the original perceptual quality. Taking into account the viewer's viewpoint movements, which were considered a cause of unnaturalness in the previous approach, we performed an experiment with an A3D display simulator and found that a depth of just 10 cm was sufficient to show deep scenes without inducing a feeling of unnaturalness. We then investigated whether the simulation results held on a real A3D display and found that the dynamic approach induced better perceptual quality than the static one there as well, and that it had a depth-enhancing effect without any hardware updates. These results suggest that providing a physical depth of 10 cm on personalized A3D displays is sufficient for showing any deeper 3D scene with appealing subjective quality.
Highlights
A key feature of automultiscopic 3D (A3D) displays is that they present a 3D scene as if it naturally existed in a physical space without any need for special glasses
These results demonstrate that the dynamic approach suppressed the unnaturalness efficiently and achieved a very high depth-compression ratio of about 1/500, as far scenes with a depth of 54.5 m were compressed to 10 cm
We found no significant difference in the maximum horizontal angles observed with the static and dynamic approaches in the test stimuli of each trial (Wilcoxon signed-rank test, p > 0.05; false discovery rate (FDR) corrected)
Summary
A key feature of automultiscopic 3D (A3D) displays is that they present a 3D scene as if it naturally existed in a physical space, without any need for special glasses. These displays provide the depth cues of binocular disparity and motion parallax, the latter induced by viewpoint movement while looking at the presented 3D images. A3D displays using light-field technologies such as integral photography [1] avoid the vergence-accommodation conflict (VAC) [2], which causes visual discomfort [3] and is one of the issues with stereoscopic viewing using special glasses. These characteristics can also be achieved by volumetric displays [4], which show voxels directly in physical space by projecting light onto physical objects [5], [6] or by using light emission [7], [8]. A3D displays thus have ideal characteristics for 3D visualization that stereoscopic and autostereoscopic displays cannot provide in theory.
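To make the depth-compression idea above concrete, the sketch below remaps scene depths into a small physical display budget. This is an illustrative assumption, not the paper's actual mapping: the logarithmic remapping, the function name `compress_depth`, and the near/far planes chosen here are all hypothetical, with only the 54.5 m scene depth and 10 cm budget taken from the highlights.

```python
import math

def compress_depth(z, z_near, z_far, budget):
    """Map a scene depth z (metres, z_near <= z <= z_far) into a physical
    display depth budget (metres). A logarithmic remapping is used here as
    an illustrative choice: it preserves nearby depth differences better
    than distant ones, which roughly matches depth sensitivity.
    NOTE: hypothetical sketch, not the mapping used in the article."""
    t = math.log(z / z_near) / math.log(z_far / z_near)  # normalize to [0, 1]
    return t * budget

# Example scale from the highlights: a deep scene squeezed into a 10 cm budget.
z_near, z_far, budget = 0.5, 55.0, 0.10  # assumed near/far planes (metres)
front = compress_depth(z_near, z_near, z_far, budget)  # front plane -> 0.0 m
back = compress_depth(z_far, z_near, z_far, budget)    # back plane  -> 0.10 m
```

The front and back planes of the scene land on the front and back of the 10 cm display volume, and intermediate depths fall monotonically between them.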
Published in: IEEE Transactions on Visualization and Computer Graphics