Abstract

This article presents a method to perform Shape From Silhouette (SFS) reconstruction of a human subject based on gravity sensing. A network of cameras observes the scene; the extrinsic parameters among the cameras are initially unknown. An IMU is rigidly coupled to each camera to provide gravity and magnetic field measurements. By fusing the data of each camera with its coupled IMU, a downward-looking virtual camera can be defined for every camera in the network. The extrinsic parameters among these virtual cameras are then estimated from the heights of two 3D points with respect to one camera in the network. Registered 2D points on the image plane of each camera are reprojected onto the image plane of its virtual camera using the concept of the infinite homography. Such a virtual image plane is horizontal, with its normal parallel to gravity. The 2D points on the virtual image planes are back-projected into 3D space to form conic volumes of the observed object. Intersecting the conic volumes from all cameras yields the silhouette (visual hull) volume of the object. Experimental results validate both the feasibility and the effectiveness of the proposed method.
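The reprojection step described above can be sketched in a few lines. For a pure rotation between a real camera and its gravity-aligned virtual counterpart, the infinite homography is H∞ = K R K⁻¹, where K is the camera intrinsic matrix and R the IMU-derived rotation. The intrinsics and tilt angle below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical camera intrinsics (focal length and principal point are made up).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Rotation aligning the camera with gravity (here an assumed 10-degree tilt
# about the x-axis); in the paper this comes from camera-IMU data fusion.
theta = np.deg2rad(10.0)
R = np.array([[1.0,           0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])

# Infinite homography: maps pixels of the real camera onto the image plane
# of the downward-looking virtual camera (rotation only, no translation).
H_inf = K @ R @ np.linalg.inv(K)

def reproject(point_xy):
    """Map a pixel from the real image to the virtual (gravity-aligned) image."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H_inf @ p
    return q[:2] / q[2]

virtual_pt = reproject((320.0, 240.0))
```

Because the virtual camera shares the real camera's optical center, the mapping depends only on the rotation; the back-projection and conic-volume intersection steps then operate entirely in the gravity-aligned frames.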
