Abstract

Augmented reality is the process of seamlessly adding computer-generated information to the real world. A common example is projecting texture onto 3D objects such as archaeological artifacts or archival materials that are too fragile to touch. Texture mapping of virtual objects is relatively straightforward because all relevant information (e.g., color and geometry) is perfectly known. In contrast, texturing real but unfamiliar 3D objects poses a number of challenges. A successful solution would have a wide range of applications, including sensing, industrial inspection of manufactured parts, reverse engineering, and object recognition, as well as clothing design, virtual museums, and the film industry. Texturing an object requires estimating its pose (i.e., position and orientation) relative to the projector used for patterning. The task is particularly difficult because there is no direct relationship between projector and scene: points of correspondence must be found to ensure acceptable results. This can be done manually, as shown, for example, by a project to illuminate a model of the Taj Mahal, where moving a crosshair projected onto the physical object registers (aligns) points of interest [1]. An alternative system, called DOME, employs a back-projection screen shaped into a curved surface [2], which makes it easy to establish the relationship between projector and object. Other approaches use physical markers such as pins [3]. Still others project texture by 'confounding' the silhouette of the real object with that of the virtually textured object. The problem with all of these techniques is that they either require human intervention or are limited to planar scenes. We propose an automatic method for projecting texture onto real objects that requires no prior knowledge of the exact pose and no physical markers. Our approach uses two cameras and one projector, and can be generalized to any number of cameras and projectors. We estimate the pose by projecting coded structured light onto the object, extracting points of correspondence from the camera images, reconstructing the 3D positions of those points, and aligning a scan of the model with the reconstruction (see Figure 1).

Figure 1. Structured light and 3D reconstruction. Coded structured light is projected onto a textureless model. Cameras then record images from which points of correspondence are extracted, and the 3D position of the points is estimated. Finally, a scan of the model is aligned with the 3D reconstruction.
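The abstract's pipeline includes estimating the 3D position of matched points seen by the two cameras. The paper does not give its triangulation method, so the following is a minimal sketch of the standard linear (DLT) two-view triangulation one would use for this step; the projection matrices P1 and P2 and the pixel correspondences x1 and x2 are assumed inputs, obtained from camera calibration and structured-light decoding.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Recover one 3D point from pixel coordinates x1, x2 (each shape (2,))
        observed by two cameras with 3x4 projection matrices P1, P2."""
        # Each view contributes two linear constraints A @ X = 0 on the
        # homogeneous 3D point X (standard DLT formulation).
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The least-squares solution is the right singular vector of A
        # associated with the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # de-homogenize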
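The final step aligns a prior scan of the model with the sparse 3D reconstruction. Again, the abstract does not specify the alignment method, so this is only a sketch under the assumption that point correspondences between scan and reconstruction are known; it uses the Kabsch algorithm to recover the least-squares rigid transform (R, t) mapping scan points onto reconstructed points. In practice, correspondences are rarely given, and an iterative scheme such as ICP would wrap a step like this one.

    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rigid transform (R, t) mapping src (Nx3) onto dst (Nx3),
        assuming row i of src corresponds to row i of dst."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t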
