Abstract

Recently, M. Tuceryan and N. Navab (2000) introduced a method for calibrating an optical see-through system based on the alignment of a set of 2D markers on the display with a single point in the scene, without restricting the user's head movements (the Single Point Active Alignment Method, or SPAAM). The method is applicable with any tracking system, provided that it gives the pose of the sensor attached to the see-through display. When cameras are used for tracking, however, one can avoid the computationally intensive and potentially unstable pose estimation process. A vision-based tracker usually consists of a camera attached to the optical see-through display that observes a set of known features in the scene; from the observed locations of these features, the pose of the camera can be computed. Most pose computation methods, however, are quite involved and can be unstable. The authors propose to keep the projection matrix of the tracker camera without decomposing it into intrinsic and extrinsic parameters, and to use it directly within the SPAAM method. Propagating the projection matrix from the tracker camera to the virtual camera, which models the combination of the eye and the optical see-through display as a pinhole camera, allows them to skip the most time-consuming and potentially unstable step of registration, namely estimating the pose of the tracker camera.
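
For illustration only, the sketch below shows the standard direct linear transform (DLT) for estimating a 3x4 projection matrix from 2D-3D correspondences. This linear estimation underlies both the SPAAM calibration of the virtual camera (from display alignments with the single world point) and the estimation of the tracker camera's projection matrix from the known scene features, which the authors keep as a whole rather than factoring into intrinsic and extrinsic parameters. The function name and NumPy implementation are assumptions made for the sketch, not code from the paper.

    import numpy as np

    def estimate_projection_matrix(points_3d, points_2d):
        """Estimate a 3x4 projection matrix P by the direct linear transform (DLT).

        points_3d -- (N, 3) array of 3D points in the reference frame (N >= 6).
        points_2d -- (N, 2) array of corresponding 2D image or screen points.
        Returns P such that [u, v, 1]^T ~ P [X, Y, Z, 1]^T (equality up to scale).
        """
        points_3d = np.asarray(points_3d, dtype=float)
        points_2d = np.asarray(points_2d, dtype=float)
        if points_3d.shape[0] < 6:
            raise ValueError("at least 6 correspondences are needed for 11 DOF")
        # Each correspondence contributes two linear equations in the 12 entries of P.
        rows = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        A = np.asarray(rows)
        # The solution (up to scale) is the right singular vector of A
        # associated with the smallest singular value.
        _, _, vt = np.linalg.svd(A)
        P = vt[-1].reshape(3, 4)
        return P / np.linalg.norm(P)

In the SPAAM setting, the 3D inputs would be the single world point expressed in the tracker's reference frame at each alignment and the 2D inputs the corresponding marker positions on the display; the resulting matrix is then used as a whole, with no decomposition into pose and intrinsic parameters.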
