Abstract

A six-legged robot system that demonstrates the reactive behaviors of simple organisms (walking between two walls, steering around obstacles, turning at corners, and making U-turns at pathway dead-ends) is described. The system, named VisionBug, uses no active range sensor; it senses its surroundings with only a stereo pair of cameras. The system assumes that the surroundings consist mostly of a ground surface, although the surface may have a varying geometric relationship with the robot because of the jiggling nature of legged motion. By exploiting the image-to-image mapping that the ground surface induces between the stereo images, the system avoids explicit 3D reconstruction and the use of optical flow altogether in locating through-ways. Specifically, it regards image features that do not respect the ground-induced mapping as obstacles, and expresses the obstacles as a 2D distribution on the ground. Based on this distribution, which is further compensated for time delay, a simple fuzzy control mechanism commands the legged motion. Experiments show that the system is effective at demonstrating the above behaviors in textured environments, at a speed fast enough for many applications.
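The ground-induced image-to-image mapping described above is, geometrically, a planar homography between the two camera views. A minimal sketch of the obstacle test might look like the following; the function name, the tolerance parameter, and the identity homography in the example are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def flag_obstacles(left_pts, right_pts, H, tol=2.0):
    """Flag stereo correspondences that violate the ground-plane homography H.

    left_pts, right_pts: (N, 2) arrays of matched feature pixel coordinates.
    H: 3x3 homography mapping left-image ground points into the right image
       (hypothetical calibration output, assumed known here).
    A feature whose mapped position deviates from its observed match by more
    than `tol` pixels does not respect the ground-induced mapping and is
    treated as an obstacle (i.e., off the ground plane).
    """
    n = len(left_pts)
    homog = np.hstack([left_pts, np.ones((n, 1))])    # homogeneous coords
    mapped = (H @ homog.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]           # back to pixel coords
    err = np.linalg.norm(mapped - right_pts, axis=1)  # reprojection error
    return err > tol                                  # True = obstacle

# Toy example: identity homography; the second match deviates strongly.
H = np.eye(3)
left = np.array([[10.0, 20.0], [30.0, 40.0]])
right = np.array([[10.5, 20.0], [42.0, 40.0]])
print(flag_obstacles(left, right, H).tolist())        # [False, True]
```

Features flagged True would then be projected to ground coordinates and accumulated into the 2D obstacle distribution that drives the fuzzy controller.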
