Abstract

Visual perception for walking machines must handle more degrees of freedom than for wheeled robots: for humanoid, four-, or six-legged robots, the camera motion has six degrees of freedom instead of the three of planar motion. Classical 3D reconstruction methods cannot be applied directly, because they require explicit knowledge of the sensor motion. In this paper, we propose an algorithm for 3D reconstruction of an unstructured environment from a single uncalibrated camera, without explicit motion information. Computer vision techniques are employed to obtain an incremental geometrical reconstruction of the environment, so that vision can serve as a sensor for robot control tasks such as navigation, obstacle avoidance, manipulation, and tracking, as well as for 3D model acquisition. The main contribution is to treat the offline 3D reconstruction problem as a search for point trajectories through the video stream. The algorithm exploits the temporal structure of the image sequence to obtain an analytical expression of the geometrical locus of the point trajectories across the sequence. The approach is a generalization of Desargues' theorem applied to multiple views taken from nearby viewpoints. Experiments on both synthetic and real image sequences show the simplicity and efficiency of the proposed method. The method provides an alternative technical solution that is easy to use and flexible in the context of robotic applications, and it can significantly improve 3D estimation accuracy.
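The trajectory-search formulation assumes that image points can be followed across consecutive frames of the video stream. As a minimal sketch of that front end only (not the authors' analytical trajectory model or the Desargues-based reconstruction), the following code tracks feature points through a monocular image sequence with a standard KLT tracker; all parameter values are illustrative assumptions.

```python
# Minimal sketch: extract per-point trajectories through an image sequence.
# This illustrates the generic trajectory-tracking setup assumed by the
# abstract, not the paper's reconstruction algorithm itself.
import cv2
import numpy as np

def track_trajectories(frames):
    """Return a list of trajectories; each trajectory is a list of (x, y) per frame."""
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    trajectories = [[tuple(p.ravel())] for p in pts]
    alive = list(range(len(trajectories)))        # indices of points still tracked

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        kept_pts, kept_idx = [], []
        for i, (p, ok) in enumerate(zip(nxt, status.ravel())):
            if ok:                                # drop points lost by the tracker
                trajectories[alive[i]].append(tuple(p.ravel()))
                kept_pts.append(p)
                kept_idx.append(alive[i])
        pts = np.array(kept_pts, dtype=np.float32).reshape(-1, 1, 2)
        alive, prev_gray = kept_idx, gray
    return trajectories
```

The resulting trajectories would then be the input to a reconstruction stage such as the one described in the paper, where their geometrical locus across nearby viewpoints is modeled analytically.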
