Abstract

We present a novel catadioptric-stereo rig consisting of a coaxially aligned perspective camera and two spherical mirrors with distinct radii in a “folded” configuration. We recover a nearly-spherical dense depth panorama (360°×153°) by fusing depth from optical flow and stereo. We observe that for motion in a horizontal plane, optical flow and stereo generate nearly complementary distributions of depth resolution. While optical flow provides strong depth cues in the periphery and near the poles of the view-sphere, stereo generates reliable depth in a narrow band about the equator. We exploit this principle by modeling the depth resolution of optical flow and stereo in order to fuse them probabilistically in a spherical panorama. To aid the designer in achieving a desired field-of-view and resolution, we derive a linearized model of the rig in terms of three parameters (the radii of the two mirrors and the axial separation between their centers). We analyze the error due to the violation of the Single Viewpoint (SVP) constraint and formulate additional constraints on the design to minimize this error. Performance is evaluated through simulation and with a real prototype by computing dense spherical panoramas in cluttered indoor settings.
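The probabilistic fusion described above can be pictured as a per-pixel inverse-variance weighting of the two depth cues, with the variance of each cue varying across the view-sphere (stereo reliable near the equator, flow reliable toward the poles). The sketch below is only a minimal illustration of that idea under our own assumptions; the function name, the toy variance profiles, and the 1°-per-pixel panorama grid are hypothetical and do not reproduce the paper's actual resolution models.

```python
import numpy as np

def fuse_depth_panoramas(d_flow, var_flow, d_stereo, var_stereo, eps=1e-9):
    """Fuse two per-pixel depth estimates defined on the same spherical
    panorama grid, treating each as an independent Gaussian measurement.

    The fused depth is the inverse-variance weighted mean; the fused
    variance is the harmonic combination. Pixels where one cue has a
    large variance are dominated by the other cue, mirroring the
    complementary equator/pole behaviour of stereo and optical flow.
    """
    w_flow = 1.0 / (var_flow + eps)
    w_stereo = 1.0 / (var_stereo + eps)
    d_fused = (w_flow * d_flow + w_stereo * d_stereo) / (w_flow + w_stereo)
    var_fused = 1.0 / (w_flow + w_stereo)
    return d_fused, var_fused

# Toy usage on a 153° x 360° grid at 1° per pixel (assumed layout).
H, W = 153, 360
d_flow = np.full((H, W), 2.0)      # hypothetical depths from flow [m]
d_stereo = np.full((H, W), 2.2)    # hypothetical depths from stereo [m]
elev = np.abs(np.linspace(-76.5, 76.5, H))[:, None]   # degrees from equator
var_stereo = 0.01 + (elev / 76.5) ** 2                # worsens toward poles
var_flow = 0.01 + (1.0 - elev / 76.5) ** 2            # worsens near equator
d_fused, var_fused = fuse_depth_panoramas(d_flow, var_flow, d_stereo, var_stereo)
```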
