Abstract

This paper introduces a novel self-localization algorithm for mobile robots, which recovers the robot's position and orientation from a single image of identified landmarks taken by an onboard camera. The visual angle between two landmarks can be derived from their projections in the same image. The distances between the optical center and the landmarks can then be calculated from the visual angles and the known landmark positions using the law of cosines. The robot's position is determined by the principle of trilateration, and its orientation is then computed from the robot position, the landmark positions, and their projections. Extensive simulations have been carried out, and a comprehensive error analysis provides insight into how to improve the localization accuracy.
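As a concrete illustration of the two core steps named in the abstract, here is a minimal 2-D sketch: ranges to the landmarks are recovered from pairwise visual angles via the law of cosines, and the position is then obtained by trilateration. This is not the authors' implementation; the landmark layout, the simulated ground-truth pose, and all names (`landmarks`, `true_pos`, `trilaterate`, etc.) are hypothetical, and NumPy/SciPy are used for the numerics.

```python
import numpy as np
from scipy.optimize import fsolve

landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # known landmark positions (illustrative)
true_pos = np.array([4.0, 3.0])                              # simulated ground-truth robot position

def visual_angle(i, j, p):
    """Angle subtended at p by landmarks i and j (what the image projections yield)."""
    u, v = landmarks[i] - p, landmarks[j] - p
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

pairs = [(0, 1), (1, 2), (0, 2)]
angles = {ij: visual_angle(*ij, true_pos) for ij in pairs}  # stand-in for measured angles

def range_equations(r):
    """Law of cosines per pair: d_ij^2 = r_i^2 + r_j^2 - 2 r_i r_j cos(theta_ij),
    where d_ij is the known distance between landmarks i and j and r_i, r_j are
    the unknown optical-center-to-landmark distances."""
    return [r[i]**2 + r[j]**2
            - 2.0 * r[i] * r[j] * np.cos(angles[(i, j)])
            - np.linalg.norm(landmarks[i] - landmarks[j])**2
            for i, j in pairs]

ranges = fsolve(range_equations, x0=[5.0, 5.0, 5.0])  # solve the three coupled equations

def trilaterate(p, r):
    """Intersect the three range circles; subtracting the first circle equation
    from the other two linearizes the system in the robot position."""
    A = 2.0 * (p[1:] - p[0])
    b = np.array([r[0]**2 - r[k]**2 + p[k] @ p[k] - p[0] @ p[0] for k in (1, 2)])
    return np.linalg.solve(A, b)

print("estimated robot position:", trilaterate(landmarks, ranges))  # ~ [4. 3.]
```

With exact angles the recovered position matches the ground truth. In practice the cosine system can admit mirror-image solutions and the angles carry measurement noise, which is why the choice of initial guess and the paper's error analysis matter.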
