Abstract
Control and coordination of agents in multi-agent systems is a demanding and complex task. Applications such as search and rescue (Jennings et al., 1997), mapping of unknown environments, or simply moving in formation require gathering a great deal of information from the surroundings. Local sensor-based information is preferred in large formations due to observability constraints. On the other hand, local information processing requires sophisticated and expensive hardware capable of gathering and processing sensor data in real time. Furthermore, refined sensors, such as cameras, entail time-consuming algorithms, and it is generally difficult to obtain satisfactory results when implementing such algorithms in real time. For this reason, immense research effort has been put into the development of fast and simple image analysis methods. This is particularly noticeable in the field of robot vision, a discipline that strives to solve problems such as robot localization and tracking, formation control, obstacle avoidance, and grasping.

Visual tracking of robots in formation is usually based on visual markers. Algorithms for detecting predefined shapes or colors are simple enough to be executed in real time even on low-cost embedded computers. In Chiem & Cervera (2004), pose estimation based on the tracking of color regions attached to the robot is presented. The position of the leader robot is estimated at a video rate of 25 frames per second. The main disadvantage of the proposed method is the marker's placement on the robot, i.e. the marker can be recognized only from a particular angle. The authors in Cruz et al. (2007) accomplish robot identification and localization by using visual tags arranged on the back of each robot on a 3D truncated octagonal-shaped structure. Each face of the visual tag carries a code that provides the vehicle's ID as well as the position of that face within the 3D visual marker.
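To illustrate why color-region tracking is cheap enough for embedded hardware, the core of such a tracker can be sketched as a threshold-and-centroid step. This is a minimal sketch, not the algorithm of Chiem & Cervera (2004); the RGB color bounds, function name, and synthetic test frame are assumptions for illustration:

```python
import numpy as np

def find_marker_centroid(image, lower, upper):
    """Return the (row, col) centroid of pixels whose RGB values fall
    inside [lower, upper], or None if no pixel matches."""
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 frame: dark background with a reddish 10x10 marker
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (200, 30, 30)

centroid = find_marker_centroid(frame, lower=(150, 0, 0), upper=(255, 80, 80))
print(centroid)  # → (44.5, 64.5)
```

Because the per-pixel work is a comparison and a mean, such a tracker runs comfortably at video rate; production systems typically threshold in HSV space instead of RGB to gain robustness against lighting changes.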
This information allows a vision sensor to identify the vehicle and estimate its pose relative to the sensor coordinate system. A robot formation control strategy based on visual pose estimation is presented in Renaud et al. (2004). The robots' visual perception is enhanced by the control of a motorized zoom, which gives the follower robot a large field of view and improves leader detection. A position-based visual servo control strategy for leader-follower formation control of unmanned ground vehicles is proposed in Dani et al. (2009). The relative pose and the relative velocity are obtained using a geometric pose estimation technique and a nonlinear velocity estimation strategy. The geometric pose estimation technique Gans (2008)
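Once the relative pose of the leader is available, a leader-follower controller can be sketched as a simple proportional law that drives the follower toward a desired distance and bearing from the leader. This is an illustrative sketch, not the controllers of Renaud et al. (2004) or Dani et al. (2009); the gain `k`, the (distance, bearing) parametrization, and the function name are assumptions:

```python
import math

def follower_velocities(leader_pose, follower_pose, d_des, phi_des, k=1.0):
    """Proportional control for a unicycle follower.

    leader_pose, follower_pose: (x, y, heading) in the world frame.
    d_des, phi_des: desired distance and bearing relative to the leader.
    Returns (v, w): forward and angular velocity commands.
    """
    lx, ly, lth = leader_pose
    fx, fy, fth = follower_pose
    # Desired follower position in the world frame
    gx = lx + d_des * math.cos(lth + phi_des)
    gy = ly + d_des * math.sin(lth + phi_des)
    # Position error expressed in the follower's body frame
    ex = math.cos(fth) * (gx - fx) + math.sin(fth) * (gy - fy)
    ey = -math.sin(fth) * (gx - fx) + math.cos(fth) * (gy - fy)
    v = k * ex                                   # drive toward the goal
    w = k * math.atan2(ey, max(ex, 1e-6))        # steer toward the goal
    return v, w

# Follower one unit directly behind the leader, which is the desired slot:
# the commanded velocities are (near) zero.
v, w = follower_velocities((0.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
                           d_des=1.0, phi_des=math.pi)
```

In a vision-based implementation, `leader_pose` relative to the follower would come from the pose estimator rather than from the world frame, but the control structure is the same.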