Abstract

Autonomous mobile vehicles need advanced systems to determine their exact position in a given coordinate system. For this purpose, GPS and vision systems are used most often. These systems have some disadvantages: for example, the GPS signal is unavailable indoors and may be inaccurate, while a vision system strongly depends on the intensity of the recorded light. This paper assumes that the primary system for determining the vehicle’s position is wheel odometry combined with an IMU (Inertial Measurement Unit) sensor, whose task is to measure changes in the robot’s orientation, such as the yaw rate. However, relying only on the wheel measurements introduces a cumulative measurement error, which is most often the result of wheel slippage and IMU sensor drift. In the presented work, this error is reduced by a vision system that continuously measures the vehicle’s distances to markers placed in its environment. Additionally, the paper describes the fusion of the signals from the vision system and the wheel odometry. Studies of the vehicle’s positioning accuracy with the vision system turned on and off are presented. In the laboratory, the average positioning error was reduced from 0.32 m to 0.13 m, while it was ensured that the vehicle’s wheels did not slip. The paper also describes the performance of the system during a drive on a real track, where the assumption was not to use the GPS geolocation system. In this case, the vision system assisted the vehicle positioning and an accuracy of 0.2 m was achieved at the control points.
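
The correction described above can be pictured with a minimal Python sketch: a dead-reckoned wheel-odometry position is blended with an absolute fix computed from a detected marker using a simple weighted update. The function name fuse_pose and the gain alpha are hypothetical stand-ins for the paper’s actual fusion scheme, which is detailed in the body of the article.

    def fuse_pose(odom_xy, marker_xy, alpha=0.3):
        """Blend the dead-reckoned (x, y) position with an absolute fix
        derived from a detected marker. alpha is a hypothetical gain:
        0 trusts the wheel odometry only, 1 trusts the marker only."""
        ox, oy = odom_xy
        mx, my = marker_xy
        return ((1 - alpha) * ox + alpha * mx,
                (1 - alpha) * oy + alpha * my)

    # Example: the odometry estimate has drifted 0.3 m from the marker fix.
    print(fuse_pose((5.0, 2.0), (5.3, 2.0)))  # -> approximately (5.09, 2.0)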

Highlights

  • The article describes the implementation of algorithms that determine the position and orientation of a mobile robot in a given coordinate system without the participation of the GPS positioning system

  • The paper describes the automatic driving system of a mobile vehicle equipped with two independent systems from which the vehicle’s position on the test track can be determined

  • The first of the implemented systems calculates the robot’s position from its wheels. This is the main system for positioning the robot in space, but its main disadvantage is that the positioning error grows with the distance travelled by the robot


Introduction

The article describes the implementation of algorithms that determine the position and orientation of a mobile robot in a given coordinate system without the participation of the GPS positioning system. The first subsystem consists of programs located in the robot’s control system and determines the robot’s trajectory calculated from the vehicle’s wheels. This is called wheel odometry and it is widely used in the automotive industry. Current research indicates that wheel odometry is crucial for robot self-localization [6]. Such an approach may be implemented using a camera, an IMU, and a wheel encoder [7]. The vision system described by the authors aims to remove positioning errors coming from the robot’s other sensors. This error is most often caused by wheel slippage.
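
As a minimal sketch of the wheel-odometry update mentioned above (an illustration under stated assumptions, not the authors’ implementation), the following Python function dead-reckons the pose of a differential-drive robot from encoder-derived wheel travel, while taking the heading change from the IMU yaw rate:

    import math

    def odometry_step(x, y, theta, d_left, d_right, yaw_rate, dt):
        """Advance the pose (x, y, theta) by one control step.

        d_left, d_right -- distance travelled by each wheel [m],
                           derived from the encoder ticks
        yaw_rate        -- IMU gyroscope reading [rad/s]
        dt              -- step duration [s]
        """
        d = 0.5 * (d_left + d_right)   # travel of the robot centre
        theta += yaw_rate * dt         # heading from the IMU, not the wheels
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        return x, y, theta

Because every step adds the wheel-slip and gyro-drift errors of the previous ones, the pose error grows with the distance travelled; this is the cumulative error that the marker-based vision measurements are used to cancel.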

Determining Vehicle Position
Position from Wheels and IMU
Position from ARTags
Edge Detection—Canny Algorithm
Contour Detection and Polygonal Approximation
Rejecting Incorrect Markers
Removing Perspective and Warping Algorithms
Detecting Marker Frame and Reading Its Code
Calculation of the Position and Orientation of the Marker
Fusion of Wheels and Vision Signals
Test in the Laboratory
Test in the Real Environment
Findings
Conclusions
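
The outline above names the stages of the marker-detection pipeline: Canny edge detection, contour extraction with polygonal approximation, rejection of incorrect candidates, and perspective removal before the marker frame and code are read. The following is a rough OpenCV sketch of those stages only; the parameter values and the helper name find_marker_candidates are assumptions made for this illustration, not the paper’s settings.

    import cv2
    import numpy as np

    def find_marker_candidates(image, side=64):
        """Return fronto-parallel crops of square marker candidates."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)          # edge detection (Canny)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            # Polygonal approximation: keep convex quadrilaterals only.
            approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
            if len(approx) != 4 or not cv2.isContourConvex(approx):
                continue
            if cv2.contourArea(approx) < 100:     # reject tiny candidates
                continue
            # Remove perspective: warp the quadrilateral to a square patch
            # from which the marker frame and code can be read.
            # (Consistent corner ordering is omitted for brevity.)
            src = approx.reshape(4, 2).astype(np.float32)
            dst = np.float32([[0, 0], [side - 1, 0],
                              [side - 1, side - 1], [0, side - 1]])
            H = cv2.getPerspectiveTransform(src, dst)
            candidates.append(cv2.warpPerspective(gray, H, (side, side)))
        return candidates

Computing the marker’s position and orientation from its four image corners would then typically use the camera intrinsics and the known marker size (e.g., via cv2.solvePnP), and the result would feed the fusion with the wheel odometry.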