Abstract

We present a novel approach to rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or, like omnidirectional cameras, to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally, multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations, especially rotations, one should generally use points at the horizon, observed over long periods of time, within the bundle adjustment; classical bundle adjustment programs are not capable of this. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with a single viewpoint, such as fisheye cameras, and scene points that are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug 3 from Point Grey.
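The key idea in the abstract is to replace Euclidean normalization of homogeneous coordinates (dividing by the last component) with spherical normalization (dividing by the vector norm). A minimal sketch in Python, not the authors' implementation, illustrates why this keeps points at infinity well defined:

```python
import numpy as np

def euclidean_normalize(Xh):
    # Euclidean normalization: divide by the homogeneous part.
    # Breaks down when Xh[-1] -> 0, i.e. for points at infinity.
    with np.errstate(divide="ignore", invalid="ignore"):
        return Xh / Xh[-1]

def spherical_normalize(Xh):
    # Spherical normalization: scale to a unit vector on the sphere.
    # Well defined for every nonzero homogeneous vector, including
    # ideal (infinitely distant) points.
    return Xh / np.linalg.norm(Xh)

# A very distant scene point and an ideal point (pure direction):
X_far = np.array([1e12, 2e12, 3e12, 1.0])
X_inf = np.array([1.0, 2.0, 3.0, 0.0])

# Euclidean normalization of X_inf produces inf/nan entries,
# while the spherical normalization stays a finite unit vector.
print(euclidean_normalize(X_inf))
print(spherical_normalize(X_inf))
```

Note that after spherical normalization the distant point and the ideal point become nearly identical unit vectors, which is exactly why the adjustment remains numerically stable for far scene points.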

Highlights

  • The paper presents a novel approach to bundle adjustment for omnidirectional and multi-view cameras, called “BACS” (Bundle Adjustment for Camera Systems), which enables the use of image and scene points at infinity

  • In order to exploit the power of bundle adjustment, it needs to be extended to handle multi-camera systems and image and scene points at infinity, see Fig. 1

  • We propose a rigorous bundle adjustment for omnidirectional and multi-view cameras which enables an efficient maximum-likelihood estimation with image and scene points at infinity


Summary

INTRODUCTION

The classical collinearity equations for the image point $\mathbf{x}_{it} = [x_{it}, y_{it}]^\mathsf{T}$ of scene point $\mathbf{X}_i = [X_i, Y_i, Z_i]^\mathsf{T}$ in camera $t$, with rotation matrix $R_t = [r_{kk'}]$, $k, k' = 1, \ldots, 3$, and projection center $\mathbf{Z}_t = [X_{0t}, Y_{0t}, Z_{0t}]^\mathsf{T}$, read

$$x_{it} = -c\,\frac{r_{11}(X_i - X_{0t}) + r_{21}(Y_i - Y_{0t}) + r_{31}(Z_i - Z_{0t})}{r_{13}(X_i - X_{0t}) + r_{23}(Y_i - Y_{0t}) + r_{33}(Z_i - Z_{0t})}, \qquad y_{it} = -c\,\frac{r_{12}(X_i - X_{0t}) + r_{22}(Y_i - Y_{0t}) + r_{32}(Z_i - Z_{0t})}{r_{13}(X_i - X_{0t}) + r_{23}(Y_i - Y_{0t}) + r_{33}(Z_i - Z_{0t})}.$$

These equations are not useful for far points or ideal points, as small angles between rays lead to numerical instabilities or singularities. The model involves the $I$ scene points $\mathbf{X}_i$, $i = 1, \ldots, I$, the $T$ motions $M_t$, $t = 1, \ldots, T$, of the camera system from the origin, the projections $P_{tc}$ into the cameras $c = 1, \ldots, C$, possibly varying over time, and the observed image points $\mathbf{x}_{itc}$ of scene point $i$ in camera $c$ at time/pose $t$. For realizing this we need to be able to represent bundles of rays together with their uncertainty, using uncertain direction vectors, to represent scene points at infinity using homogeneous coordinates, and to minimize the number of parameters to be estimated. The following developments are based on the minimal representation schemes proposed in Förstner (2012), which reviews previous work and generalizes e.g. Bartoli (2002).
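The minimal representation mentioned above updates a spherically normalized vector with only $n-1$ parameters, moving in the tangent space of the unit sphere. A hedged Python sketch of this idea (the basis construction via SVD is one common choice, not necessarily the exact scheme of Förstner (2012)):

```python
import numpy as np

def null_basis(x):
    # Columns span the null space of x^T, i.e. the tangent space of
    # the unit sphere at the unit vector x. For an n-vector this gives
    # an n x (n-1) orthonormal basis, enabling a minimal update.
    u, _, _ = np.linalg.svd(x.reshape(-1, 1))
    return u[:, 1:]

def minimal_update(x, delta):
    # Apply a reduced (n-1)-dimensional update delta in the tangent
    # space, then renormalize back onto the unit sphere.
    x_new = x + null_basis(x) @ delta
    return x_new / np.linalg.norm(x_new)

# Example: a spherically normalized 3-vector (e.g. a camera ray)
x = np.array([0.0, 0.0, 1.0])
x_updated = minimal_update(x, np.array([0.1, -0.05]))
```

Estimating in this reduced parameterization avoids the rank deficiency that the unobservable scale of a homogeneous vector would otherwise introduce into the normal equations.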

Model for sets of single cameras
Model for sets of camera systems
Generating camera directions from observed image coordinates
The estimation procedure
Implementation details
Test on correctness and feasibility
Decrease of rotational precision excluding far points
CONCLUSIONS AND FUTURE WORK
