Abstract

A novel approach for three-dimensional (3D) volumetric reconstruction of an object inside a scene is proposed. A camera network is used to observe the scene, and each camera in the network is rigidly coupled with an Inertial Sensor (IS). For each IS-camera couple, a virtual camera is defined using the concept of infinite homography, by fusing inertial and visual information. Using the inertial data, and without assuming a planar ground, a set of virtual horizontal planes is defined. The intersections of these inertial-based virtual planes with the object are registered using the concept of planar homography. Moreover, a method to estimate the translation vectors among the virtual cameras is proposed; it requires only the relative heights of two 3D points in the scene with respect to one of the cameras, together with their correspondences on the image planes. Experimental results for the proposed 3D reconstruction method are provided for two types of scenarios. In the first, a single IS-camera couple is placed at different locations around the object. In the second, the 3D reconstruction of a walking person (dynamic case) is performed, using a set of cameras installed in a smart room for data acquisition. In addition, a set of simulated experiments is used to analyse the accuracy of the translation estimation method. The experimental results show the feasibility and effectiveness of the proposed framework for multi-layer data registration and volumetric reconstruction.
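For readers unfamiliar with the two homography concepts the abstract invokes, the sketch below illustrates the standard infinite homography and plane-induced homography relations from multi-view geometry. It is a minimal Python/NumPy illustration with placeholder values for the intrinsics K, the IS-derived rotation R, the translation t, and the virtual plane (n, d); it is not the authors' implementation.

```python
import numpy as np

# Placeholder calibration and pose values (all hypothetical).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics
R = np.eye(3)                            # rotation between views (e.g. from the inertial sensor)
t = np.array([0.1, 0.0, 0.0])            # translation between views (metres)
n = np.array([0.0, 0.0, 1.0])            # unit normal of a virtual horizontal plane
d = 2.0                                  # distance of the plane from the first camera (metres)

# Infinite homography: warp induced by rotation alone (plane at infinity).
H_inf = K @ R @ np.linalg.inv(K)

# Plane-induced homography for the virtual plane (n, d).
H_plane = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp(H, uv):
    """Map a pixel (u, v) through a homography and return the warped pixel."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

print(warp(H_plane, (400.0, 300.0)))
```

Registering object slices on each virtual horizontal plane amounts to applying such plane-induced homographies between views, as described in the abstract.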
