Abstract

RoboCup (http://www.robocup.org) is an international initiative that aims to promote research and development in mobile robotics and related areas. Robotic soccer is one of the proposed problems, since it represents a highly complex challenge in which fully autonomous robots cooperate in order to achieve a common goal (winning the game). Within RoboCup soccer competitions, the Middle-Size League proposes a challenge where two teams of five fast robots, measuring up to 80 cm and weighing up to 40 kg, play soccer on an 18 x 12 m field in a semi-structured, highly dynamic environment. This challenge requires real-time perception of the overall environment in order to allow self-localization, teammate and opponent localization and, of course, determination of the ball position and movement vector. In practice, this means that adopting an omni-directional vision system as the main sensory element of the robot, although not mandatory, has significant advantages over other solutions such as standard panoramic vision systems. A common solution found in robots from most teams in this competition, as well as in robots for other autonomous mobile robot applications, is based on a catadioptric omni-directional vision system composed of a regular video camera pointed at a hyperbolic mirror – or any other mirror obtained from a solid of revolution (e.g. an ellipsoidal convex mirror). This is the case, just to name a few, of the teams described in (Zivkovic & Booij, 2006), (Wolf, 2003), (Menegatti et al., 2001, 2004) and (Lima et al., 2001). This type of setup ensures an integrated perception of all major target objects in the robot's surrounding area, allowing a higher degree of maneuverability at the cost of resolution that degrades increasingly with distance from the robot (Baker & Nayar, 1999) when compared to non-isotropic setups.
For most practical applications, as is the case of the RoboCup competition, this setup requires the translation of the planar field of view, at the camera sensor plane, into real-world coordinates on the ground plane, with the robot as the origin of this coordinate system. In order to simplify this non-linear transformation, most practical solutions adopted in real robots choose to create a mechanical geometric setup that ensures a symmetrical solution to the problem by means of the single-viewpoint (SVP) approach (Zivkovic & Booij, 2006), (Wolf, 2003) and (Lima et al., 2001). This, on the other hand, calls for a precise alignment of the four major points comprising the vision setup: the mirror focus, the mirror apex, the lens focus and the center of the image sensor. Furthermore, it also
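As a rough illustration of the image-to-ground translation described above, the sketch below assumes an idealized SVP setup: the effective viewpoint (the mirror focus) sits at an assumed height H above the ground plane, and a hypothetical, previously calibrated function maps a pixel's radial distance from the image center to the vertical angle of the incoming ray. All names and numeric values here are illustrative assumptions, not parameters of any actual robot described in the text:

```python
import math

# Assumed height of the effective viewpoint (mirror focus) above the
# ground plane, in metres. Illustrative value only.
H = 0.65

def pixel_radius_to_elevation(r_px: float) -> float:
    """Hypothetical calibration curve: radial pixel distance -> angle
    (radians) measured from the downward vertical. A real system would
    derive this from the mirror profile or from calibration targets."""
    # Monotonic placeholder; the 150.0 scale factor is an assumption.
    return math.atan(r_px / 150.0)

def image_to_ground(u: float, v: float, cu: float, cv: float):
    """Map image point (u, v), given the image centre (cu, cv), to
    robot-centric ground-plane coordinates (x, y) in metres."""
    du, dv = u - cu, v - cv
    r_px = math.hypot(du, dv)
    if r_px == 0.0:
        return 0.0, 0.0  # point directly below the viewpoint
    theta = pixel_radius_to_elevation(r_px)  # angle from the vertical
    d = H * math.tan(theta)                  # radial ground distance
    # Preserve the pixel's bearing when projecting onto the ground.
    return d * du / r_px, d * dv / r_px

# A point 150 px from the image centre maps, under the placeholder
# calibration, to a 45-degree ray and hence a ground distance of H.
x, y = image_to_ground(470.0, 240.0, 320.0, 240.0)
```

The non-linearity the text refers to is visible in the `tan` term: equal pixel steps near the image edge correspond to rapidly growing ground distances, which is exactly the resolution degradation with distance noted above.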
