Abstract

Autonomous visual navigation, i.e. the determination of position, attitude and velocity (ego-motion) by processing images from onboard camera(s), is essential for mobile robot control even where GPS coverage exists, as the accuracy of GPS data and/or the available map of the surroundings can be insufficient. Moreover, GPS signal reception can be unstable in many locations (inside buildings, in tunnels, in narrow streets, canyons, under trees, etc.). Up to now, most practical visual navigation solutions have been developed for ground robots moving in cooperative and/or well-determined environments. However, future generations of mobile robots should also be capable of operating in complex and non-cooperative 3D environments. Visual navigation in such conditions is much more challenging, especially for flying robots, where the full 6DOF pose/motion must be determined. This generally requires 3D environment perception, i.e., determination of a local depth map of the visible scene. 3D scene information can be obtained by stereo imaging; however, this solution has certain limitations. It requires at least two cameras precisely mounted with a certain stereo base (which can be critical for small vehicles), and because the stereo base is fixed, the range over which depth can be determined is limited. A more universal solution with fewer hardware requirements can be achieved with optical flow processing of sequential images from a single onboard camera. The ego-motion of a camera rigidly mounted on a vehicle is mapped into the motion of image pixels in the camera focal plane. This image motion is commonly referred to as image flow or optical flow (OF) (Horn & Schunck, 1981). This vector field of 2D image motion can be used efficiently for 3D environment perception (mapping) and vehicle pose/motion determination, as well as for obstacle avoidance or visual servoing. The main challenge in using optical flow in real applications is its computability in terms of density (sparse vs. dense optical flow), accuracy, robustness to dark and noisy images, and real-time determination. The general problem of optical flow determination can be formulated as the extraction of the two-dimensional projection of the 3D relative motion onto the image plane in the form of a field of correspondences (motion vectors) between points in consecutive image frames.
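As an illustration of such a field of correspondences, the short sketch below estimates dense optical flow between two consecutive grayscale frames using OpenCV's Farnebäck method. This is only a minimal example, not the method developed in this work; the file names and parameter values are assumptions chosen for demonstration.

    # Illustrative sketch: dense optical flow between two consecutive frames.
    # Assumes OpenCV (cv2) and NumPy are available; "frame0.png"/"frame1.png"
    # are hypothetical file names standing in for consecutive camera images.
    import cv2
    import numpy as np

    prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Farneback dense optical flow: returns an (H, W, 2) array of per-pixel
    # motion vectors (dx, dy) mapping points in frame0 to points in frame1.
    # Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Magnitude and direction of the motion field, e.g. for visualization
    # or as input to ego-motion / depth estimation.
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean flow magnitude (pixels):", float(np.mean(magnitude)))

The resulting per-pixel motion vectors are exactly the kind of dense 2D field that, given camera calibration, can be related to the vehicle's ego-motion and to the local scene depth discussed above.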
