Abstract

Structured light methods have traditionally been studied in rigid configurations, in which the relative position and orientation of the light emitter and the camera are fixed and known beforehand. In this paper we break with this rigidity and present a new structured light system in a non-rigid configuration. The system is composed of a wearable standard perspective camera and a simple laser emitter. Our non-rigid configuration permits free motion of the light emitter with respect to the camera. The point-based pattern emitted by the laser lets us easily establish correspondences between the image from the camera and a virtual image generated from the light emitter. Using these correspondences, our method computes the rotation and the translation up to scale of the scene planes onto which the point pattern is projected, and reconstructs those planes. This constitutes a very useful tool for navigation applications in indoor environments, which are mainly composed of planar surfaces.
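The geometric core of this computation is the plane-induced homography, a standard result in two-view geometry; the notation below is ours for illustration and may differ from the paper's:

    H \propto K_c \left( R - \frac{t\, n^{\top}}{d} \right) K_l^{-1}

Here K_c and K_l are the intrinsic matrices of the camera and of the virtual image of the laser emitter, (R, t) is the relative pose between the two, n is the unit normal of the plane receiving the dot pattern, and d is that plane's distance from the emitter. Since t and d enter only through the ratio t/d, the translation is recoverable only up to scale, which is exactly why the reconstruction above is up to scale.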

Highlights

  • One of the best-known active methods for extracting 3D information from a scene is structured light [1]

  • In comparison with passive methods, which are based on extracting features from textured images and subsequent triangulation [2], structured light can be used with non-textured images in which few features are present

  • Structured light systems consist of a camera and a light emitter that projects a pattern onto the scene [3,4,5,6]

Summary

Introduction

One of the best-known active methods for extracting 3D information from a scene is structured light [1]. Structured light systems consist of a camera and a light emitter that projects a pattern onto the scene [3,4,5,6]. Both Kinect and Asus Pro Live are structured light systems in a rigid configuration, since the camera and the projector are fixed and their intrinsic and extrinsic calibrations are known a priori. We use the image of the light pattern acquired by the camera, together with a virtual image generated from the light emitter, to reconstruct the scene. From this reconstruction we compute the orientation and translation of the planar surfaces onto which the laser pattern has been projected.
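As an illustration, here is a minimal sketch of this pipeline using OpenCV: fit the homography between the dot pattern in the virtual laser image and its detections in the camera image, then decompose it into candidate rotation, translation (up to scale) and plane normal. All names, intrinsics and point values below are hypothetical stand-ins, and using a single intrinsic matrix for both views is a simplification; the paper's own formulation may differ.

    import numpy as np
    import cv2

    # Hypothetical shared intrinsics for the camera and the laser's
    # virtual image (illustrative values; a simplification, since the
    # two views generally have different intrinsics).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Synthetic stand-ins for real detections: a grid of laser dots in
    # the virtual image, mapped through a ground-truth homography and
    # perturbed with noise to play the role of camera detections.
    pts_laser = np.array([[x, y] for x in range(100, 500, 80)
                                 for y in range(100, 400, 80)], dtype=np.float64)
    H_true = np.array([[ 1.05, 0.02, 12.0],
                       [-0.01, 0.98, -7.0],
                       [ 1e-5, 2e-5,  1.0]])
    proj = cv2.perspectiveTransform(pts_laser.reshape(-1, 1, 2), H_true)
    pts_cam = proj.reshape(-1, 2) + np.random.normal(0.0, 0.3, pts_laser.shape)

    # Fit the plane-induced homography; RANSAC discards dots that do
    # not lie on the dominant plane.
    H, inliers = cv2.findHomography(pts_laser, pts_cam, cv2.RANSAC, 3.0)

    # Decompose H into candidate {R, t/d, n} solutions. Translation is
    # recovered only up to the unknown plane depth d, i.e. up to scale.
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    for R, t, n in zip(Rs, ts, normals):
        print("R:\n", R, "\nt (up to scale):", t.ravel(), "\nn:", n.ravel())

decomposeHomographyMat returns up to four physically distinct solutions; in practice the candidates are pruned with cheirality checks, keeping only those for which the reconstructed points lie in front of both views.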

Problem Definition
Camera Model
Laser Model
Homography
Scene Reconstruction
First Plane Extraction
Compute Rotation and Translation between Camera and Laser
Segmentation of Second and Subsequent Planes
Fixed-Pose Homography
Planes Reconstruction
Experiments
Simulations with Synthetic Data
Sensitivity Analysis
Reconstruction of a Simulated Scene
Real Experiments
Conclusions