Abstract

This paper proposes a framework for volumetric 3D reconstruction using a camera network. A network of cameras observes a scene, and each camera is rigidly coupled with an Inertial Sensor (IS). The 3D orientation provided by the IS is first used to define a virtual camera network whose axes are aligned with the earth's cardinal directions. A set of virtual planes is then defined for 3D reconstruction, without any planar-ground assumption, using only the 3D orientation data provided by the IS. A GPU-based implementation of the proposed method is provided to demonstrate the promising results.
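To make the two geometric steps of the abstract concrete, the following minimal Python/NumPy sketch illustrates (a) warping a camera image to a virtual, earth-aligned camera using the IS orientation, and (b) the plane-induced homography used when sweeping virtual planes. All function names, frame conventions, and numeric values are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def earth_aligned_homography(K, R_is):
        # R_is: rotation from the earth frame (e.g. north-east-down) to the
        # camera frame, as reported by the inertial sensor (assumed convention).
        # A virtual camera sharing the same optical centre but with axes aligned
        # to the earth frame sees the image warped by the pure rotation
        # homography H = K R_is^T K^{-1}.
        return K @ R_is.T @ np.linalg.inv(K)

    def plane_induced_homography(K_ref, K_cam, R, t, n, d):
        # Maps pixels of the reference virtual view onto another camera for a
        # virtual plane {X : n^T X = d} expressed in the reference frame, with
        # the second camera at pose X_cam = R X_ref + t. Points on the plane
        # satisfy n^T X / d = 1, hence X_cam = (R + t n^T / d) X_ref and
        # H = K_cam (R + t n^T / d) K_ref^{-1}.
        return K_cam @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_ref)

    # Toy usage: sweep a family of virtual planes with normal along the gravity
    # axis; no planar-ground assumption is needed, only the IS orientation.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    R_is = np.eye(3)                        # placeholder IS orientation
    H_align = earth_aligned_homography(K, R_is)
    n = np.array([0.0, 0.0, 1.0])           # plane normal (gravity direction)
    t = np.array([0.1, 0.0, 0.0])           # placeholder camera baseline
    for d in np.linspace(1.0, 5.0, 5):      # sweep over plane offsets
        H = plane_induced_homography(K, K, np.eye(3), t, n, d)

In a GPU-based pipeline such as the one the paper reports, these homography warps would typically be applied per plane and per camera in parallel; the sketch above only shows the geometry.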
