Abstract

An autonomous mobile robot that navigates in outdoor environments requires functional and decisional routines enabling it to supervise the estimation and the performance of all its movements while carrying out a planned trajectory. To this end, a robot is usually equipped with several high-performance sensors. However, we are often interested in less complex, low-cost sensors that provide enough information to detect in real time when the trajectory is free of dynamic obstacles. In this context, our strategy focuses on visual sensors, particularly on stereo vision, since it provides the depth coordinate and therefore a better perception of the environment. Visual perception for mobile robot navigation is a complex function that requires salient or evident patterns to identify something that breaks the continuous tendency of the data. Usually, interest points or segments are used for evaluating patterns in position, velocity, appearance or other characteristics that allow us to form groups (Lookingbill et al., 2007), (Talukder & Matthies, 2004). Whereas complete feature vectors are more expressive for describing objects, here we use 3D feature points to propose a computationally less demanding strategy that preserves the main objective of the work: detecting and tracking moving objects in real time.

This chapter presents a strategy for detecting and tracking dynamic objects using a stereo-vision system mounted on a mobile robot. First, a set of interest points is extracted from the left image. A disparity map, provided by a real-time stereo vision algorithm implemented on an FPGA, gives the 3D position of each point. In addition, velocity magnitude and orientation are obtained to characterize the set of points in the space R6. Groups of dynamic 2D points are formed using the a contrario clustering technique in the 4D space and then evaluated on their depth values, yielding groups of dynamic 3D points.
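The grouping step above can be illustrated with a minimal sketch: dynamic points are first clustered in the 4D space (x, y, vx, vy) and each cluster is then split by depth to obtain groups of 3D points. The a contrario clustering used in the chapter is replaced here by a simple greedy, distance-based stand-in, and all thresholds (`pos_tol`, `vel_tol`, `depth_tol`) are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def cluster_dynamic_points(pts_4d, depth, pos_tol=20.0, vel_tol=2.0, depth_tol=0.5):
    """Group dynamic points in (x, y, vx, vy), then split groups by depth.

    pts_4d : (n, 4) array of image position and velocity per point
    depth  : (n,) array of depths from the disparity map

    Greedy distance-based stand-in for the a contrario clustering.
    """
    n = len(pts_4d)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        # Grow the group: absorb any unlabeled point that is close to a
        # current member both in image position and in velocity.
        changed = True
        while changed:
            changed = False
            members = pts_4d[labels == next_label]
            for j in range(n):
                if labels[j] != -1:
                    continue
                d_pos = np.min(np.linalg.norm(members[:, :2] - pts_4d[j, :2], axis=1))
                d_vel = np.min(np.linalg.norm(members[:, 2:] - pts_4d[j, 2:], axis=1))
                if d_pos < pos_tol and d_vel < vel_tol:
                    labels[j] = next_label
                    changed = True
        next_label += 1
    # Evaluate each 2D group on depth: a gap larger than depth_tol along the
    # sorted depths starts a new group of 3D points.
    final = -np.ones(n, dtype=int)
    k = 0
    for g in range(next_label):
        idx = np.where(labels == g)[0]
        order = idx[np.argsort(depth[idx])]
        start = 0
        for t in range(1, len(order) + 1):
            if t == len(order) or depth[order[t]] - depth[order[t - 1]] > depth_tol:
                final[order[start:t]] = k
                k += 1
                start = t
    return final
```

Two sets of points with distinct motions and depths would come out with two different labels; points that are close in image position but move differently (or sit at very different depths) end up in separate groups.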
Each of these groups is initialized as a convex contour with the velocity and orientation of its points, giving a first estimate of the dynamic object's position and velocity. Then an active contour defines a more detailed silhouette of the object based on the intensity and depth values inside the contour. It is well known that active contour techniques require highly dense computations. Therefore, in order to reduce the processing time, a fixed number of iterations is used at each frame, so that convergence to the object's real boundary is achieved incrementally over successive frames.
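The fixed-iteration budget can be sketched as follows. This is a toy greedy snake (Williams-Shah style) driven only by a gradient-magnitude image, not the chapter's intensity-and-depth energy; the weights `alpha`, `beta`, `gamma` and the budget `iters_per_frame` are illustrative assumptions. The point of the sketch is the per-frame budget: the partially converged contour is carried over to the next frame, so convergence accrues incrementally in real time.

```python
import numpy as np

def snake_step(contour, grad_mag, alpha=1.0, beta=1.0, gamma=5.0):
    """One greedy active-contour iteration: each vertex moves to the 3x3
    neighbor minimizing weighted continuity + curvature - gradient energy."""
    n = len(contour)
    # Mean inter-vertex spacing of the closed contour (continuity reference).
    mean_d = np.mean(np.linalg.norm(np.diff(contour, axis=0, append=contour[:1]), axis=1))
    new = contour.copy()
    h, w = grad_mag.shape
    for i in range(n):
        prev_p, next_p = new[(i - 1) % n], contour[(i + 1) % n]
        best, best_e = contour[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                cand = contour[i] + np.array([dy, dx], dtype=float)
                y, x = int(cand[0]), int(cand[1])
                if not (0 <= y < h and 0 <= x < w):
                    continue
                e_cont = abs(mean_d - np.linalg.norm(cand - prev_p))
                e_curv = np.linalg.norm(prev_p - 2 * cand + next_p)
                e = alpha * e_cont + beta * e_curv - gamma * grad_mag[y, x]
                if e < best_e:
                    best_e, best = e, cand
        new[i] = best
    return new

def refine_per_frame(contour, grad_mag, iters_per_frame=5):
    """Spend only a fixed iteration budget on each frame; the returned
    contour is reused as the starting contour for the next frame."""
    for _ in range(iters_per_frame):
        contour = snake_step(contour, grad_mag)
    return contour
```

With a few frames of budgeted refinement, a contour initialized around a group of points shrinks toward the nearest strong gradient ridge instead of being iterated to full convergence within a single frame.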
