Abstract
A full-view spherical camera exploits its extended field of view to map the complete environment onto a 2D image plane. Thus, with a single shot, it delivers considerably more information about the surroundings than a normal perspective camera or the plenoptic cameras commonly used in light field imaging. However, in contrast to a light field camera, a spherical camera does not capture directional information about the incident light, so a single shot from a spherical camera is not sufficient to reconstruct 3D scene geometry. In this paper, we introduce a method combining spherical imaging with the light field approach. To obtain 3D information with a spherical camera, we capture several independent spherical images by applying a constant vertical offset between the camera positions and combine the images into a Spherical Light Field (SLF). We can then compute disparity maps by structure tensor orientation analysis on epipolar plane images, which in this context are 2D cuts through the spherical light field at constant azimuth angle. This method competes with the acquisition range of laser scanners and allows for fast and extensive recording of a given scene. We benchmark our approach by comparing disparity maps of ray-traced scenes against their ground truth. Furthermore, we provide disparity maps of real-world datasets.
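The structure tensor orientation analysis mentioned above can be illustrated with a minimal sketch. The function below is not the authors' implementation; the name, smoothing parameters, and test pattern are illustrative assumptions. The core idea is standard for epipolar plane images (EPIs): a scene point traces a line in the EPI, and the line's slope, recovered from the orientation of the smoothed structure tensor, is the disparity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma_grad=1.0, sigma_tensor=2.0):
    """Estimate per-pixel disparity from an epipolar plane image (EPI)
    via structure tensor orientation analysis (illustrative sketch)."""
    # Gradients along the camera axis (s, rows) and spatial axis (x, cols).
    Is, Ix = np.gradient(gaussian_filter(epi, sigma_grad))
    # Smoothed structure tensor components.
    Jxx = gaussian_filter(Ix * Ix, sigma_tensor)
    Jss = gaussian_filter(Is * Is, sigma_tensor)
    Jxs = gaussian_filter(Ix * Is, sigma_tensor)
    # Orientation of the dominant EPI line; its slope gives the disparity.
    phi = 0.5 * np.arctan2(2.0 * Jxs, Jxx - Jss)
    return -np.tan(phi)

# Synthetic EPI: a sinusoid shifted by a constant disparity d_true per view.
d_true = 0.7
s, x = np.meshgrid(np.arange(32), np.arange(128), indexing="ij")
epi = np.sin(0.3 * (x - d_true * s))
d_est = epi_disparity(epi)
# Evaluate away from the borders, where gradients are less reliable.
print(round(float(np.median(d_est[8:-8, 16:-16])), 1))  # → 0.7
```

In the paper's setting, each EPI is a 2D cut through the spherical light field at constant azimuth angle, with one axis running over the vertically offset camera positions; the same orientation analysis then applies per cut.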