Abstract

This article presents a method for Shape From Silhouette (SFS) reconstruction of a human based on gravity sensing. A network of cameras is used to observe the scene; the extrinsic parameters among the cameras are initially unknown. An IMU is rigidly coupled to each camera to provide gravity and magnetic data. By fusing the data from each camera with those of its coupled IMU, a downward-looking virtual camera can be defined for every camera in the network. The extrinsic parameters among the virtual cameras are then estimated using the heights of two 3D points with respect to one camera in the network. Registered 2D points on the image plane of each camera are reprojected onto the image plane of its virtual camera using the concept of the infinite homography. Such a virtual image plane is horizontal, with its normal parallel to gravity. The 2D points from the virtual image planes are back-projected into 3D space to form conic volumes of the observed object. The silhouette volume of the object is obtained from the intersection of the conic volumes created by all cameras. Experimental results validate both the feasibility and the effectiveness of the proposed method.
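As a rough illustration of the reprojection step described above, the sketch below builds a gravity-aligned virtual camera frame from IMU gravity and magnetic vectors and warps image points through the infinite homography H = K R K^{-1}. The intrinsic matrix K, the sensor readings, and the particular axis convention for the virtual frame are illustrative assumptions, not values or definitions taken from the paper.

```python
import numpy as np

def gravity_aligned_rotation(g_cam, m_cam):
    """Rotation from the real camera frame to a downward-looking virtual
    camera frame, built from the IMU gravity vector g_cam and magnetic
    field vector m_cam expressed in the camera frame (assumed inputs)."""
    z = g_cam / np.linalg.norm(g_cam)        # virtual optical axis: along gravity
    x = m_cam - np.dot(m_cam, z) * z         # project magnetic vector onto the horizontal plane
    x /= np.linalg.norm(x)                   # virtual x-axis: horizontal
    y = np.cross(z, x)                       # complete a right-handed frame
    # Rows are the virtual axes in camera coordinates, so R maps camera -> virtual.
    return np.vstack([x, y, z])

def infinite_homography(K, R):
    """H = K R K^{-1}: maps image points of the real camera onto the image
    plane of a rotated (virtual) camera sharing the same optical center."""
    return K @ R @ np.linalg.inv(K)

def reproject(H, pts):
    """Apply a homography to an Nx2 array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]              # back to pixel coordinates

# Example with illustrative values (not from the paper):
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
g = np.array([0.1, 0.98, 0.05])   # gravity measured by the coupled IMU
m = np.array([0.3, 0.0, 0.9])     # magnetic field measured by the IMU
H = infinite_homography(K, gravity_aligned_rotation(g, m))
print(reproject(H, np.array([[320.0, 240.0]])))
```

In this sketch the warped points lie on a horizontal virtual image plane whose normal is parallel to gravity; back-projecting such points from every camera and intersecting the resulting conic volumes yields the silhouette volume described in the abstract.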
