Abstract

We describe an augmented reality system for superimposing three-dimensional (3-D) live content onto two-dimensional fiducial markers in the scene. In each frame, the Euclidean transformation between the marker and the camera is estimated. The equivalent virtual view of the live model is then generated and rendered into the scene at interactive speeds. The 3-D structure of the model is calculated using a fast shape-from-silhouette algorithm based on the outputs of 15 cameras surrounding the subject. The novel view is generated by projecting rays through each pixel of the desired image and intersecting them with the 3-D structure. Pixel color is estimated by taking a weighted sum of the colors of the projections of this 3-D point in nearby real camera images. Using this system, we capture live human models and present them via the augmented reality interface at a remote location. We can generate 384×288-pixel images of the models at 25 fps, with a latency of <100 ms. The result gives the strong impression that the model is a real 3-D part of the scene.
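The view-synthesis step described above can be sketched in a few lines. The Python fragment below is a minimal illustration, not the authors' implementation: the camera record layout (a projection matrix `P` and optical center `center`), the choice of the k angularly nearest cameras, and the cosine-based weighting are all assumptions standing in for the abstract's "weighted sum of the colors of the projections of this 3-D point in nearby real camera images".

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X (shape (3,)) with a 3x4 camera matrix P into pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def novel_view_color(X, view_dir, cameras, images, k=3):
    """
    Estimate the color of surface point X seen along unit direction view_dir
    by blending its projections in the k real cameras whose viewing rays are
    closest in angle to the virtual one (one plausible weighting scheme).

    cameras: list of dicts with keys 'P' (3x4 matrix) and 'center' ((3,) array)
    images:  list of HxWx3 uint8 arrays, one per camera
    """
    # Unit ray from each real camera's center toward the surface point.
    dirs = np.array([(X - c["center"]) / np.linalg.norm(X - c["center"])
                     for c in cameras])
    cos_sim = dirs @ view_dir                 # larger = view more similar
    nearest = np.argsort(-cos_sim)[:k]        # indices of k nearest cameras

    weights, colors = [], []
    for i in nearest:
        u, v = project(cameras[i]["P"], X)
        h, w, _ = images[i].shape
        # Use this camera only if the point projects inside its image and
        # the camera actually faces the virtual viewing direction.
        if 0 <= u < w and 0 <= v < h and cos_sim[i] > 0:
            colors.append(images[i][int(v), int(u)].astype(float))
            weights.append(cos_sim[i])

    if not weights:
        return np.zeros(3)                    # no camera sees the point
    wsum = np.array(weights) / np.sum(weights)
    return (wsum[:, None] * np.array(colors)).sum(axis=0)
```

In the full pipeline this would run once per output pixel: X would be the intersection of that pixel's ray with the shape-from-silhouette reconstruction, and view_dir the direction of the virtual ray through the pixel.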
