Abstract

As technology advances and becomes cheaper, robotic vehicles have gained a large number of applications. Their use is also spreading because they are getting smaller, lighter and easier to build. In this paper we present a simple and effective way to map a road network with the help of a driverless vehicle. Our approach consists of only three parts: vision segmentation, angle variation and travelled distance. A video camera attached to a Lego® NXT Mindstorms vehicle guides it, by image segmentation using the Matlab® Image Processing Toolbox, along a road network represented by black tape on a white floor. The algorithm makes the vehicle travel over the whole road network, memorizing the main coordinates to identify every crossroad by keeping track of the travelled distance and the current angle. The crossroads and road ends are the nodes of the graph. After several simulations, the modelling proved successful at this small scale. Consequently, there are good chances that driverless cars and UAVs can also use these strategies to map route networks. The algorithm presented in this paper is useful when there is no localization signal such as GPS, for example, in navigation on water, in tunnels or inside buildings. Faculty Sponsor: Francisco de Assis Zampirolli
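As a minimal sketch of the vision part (a hypothetical Python/NumPy analogue, not the paper's Matlab code; the function names and threshold value are illustrative assumptions), dark tape on a light floor can be segmented by grayscale thresholding, and the tape's horizontal offset in the rows nearest the vehicle can serve as a steering signal:

```python
import numpy as np

def segment_tape(gray, threshold=80):
    """Binary mask of the dark road tape on a light floor: True where tape."""
    return gray < threshold

def tape_center_offset(mask):
    """Horizontal offset (in pixels) of the tape in the bottom rows,
    relative to the image center; used to steer the vehicle."""
    band = mask[-10:, :]                 # rows closest to the vehicle
    cols = np.where(band.any(axis=0))[0]
    if cols.size == 0:
        return None                      # tape lost: stop or search
    return float(cols.mean() - band.shape[1] / 2)

# Synthetic 100x100 "white floor" frame with a dark vertical tape strip.
frame = np.full((100, 100), 220, dtype=np.uint8)
frame[:, 55:60] = 30                     # tape strip right of center
mask = segment_tape(frame)
offset = tape_center_offset(mask)        # positive: tape is to the right
```

A positive offset would steer the vehicle right, a negative one left; a `None` result signals a road end.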

Highlights

  • Vision-based systems consist of a sequence of digital images that enable the detection and the identification of objects

  • Automation is done by an unmanned vehicle that travels across the network and is guided only through image measurements

  • The adjacency matrices used in the experiments considered graphs of up to 10 nodes; the higher the number of crossroads in the road network, the bigger the adjacency matrix must be
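The highlights above can be illustrated with a small sketch of the map representation (an illustrative Python assumption, not the paper's Matlab code): crossroads and road ends become node indices, and each travelled segment fills a symmetric entry of the adjacency matrix with its length.

```python
import numpy as np

MAX_NODES = 10  # the experiments considered graphs of up to 10 nodes

# adj[a, b] holds the travelled length of the road segment a-b (0 = no road).
adj = np.zeros((MAX_NODES, MAX_NODES), dtype=float)

def add_segment(a, b, length):
    """Record a road segment of the given travelled length between nodes a and b."""
    adj[a, b] = length
    adj[b, a] = length  # road segments are traversable in both directions

# Example: segments discovered while following the tape between crossroads 0, 1, 2.
add_segment(0, 1, 42.5)
add_segment(1, 2, 30.0)
```

With more crossroads, `MAX_NODES` (and hence the matrix) must grow, which is the scaling point the highlight makes.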


Summary

Introduction

Vision-based systems consist of a sequence of digital images that enable the detection and the identification of objects. Several techniques to endow vehicles with vision-based control can be found in (Güzel, 2013; Bertozzi & Broggi, 1997; Tisdale, et al, 2009; Carloni, et al, 2013; Yunji, et al, 2013; Ivancsits & Lee, 2013; Lim, et al, 2012; Mahony, et al, 2012; Guillet, et al, 2014; Mcgill & Taylor, 2011). They enable object detection, path orientation and distance control. GPS (Global Positioning System) receivers pick up satellite signals with a considerable margin of error, and such signals are sometimes blocked altogether, for example underwater, inside tunnels or inside buildings.
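Where GPS is unavailable, position can instead be integrated from the travelled distance and the current angle, as the approach in this paper does. A minimal dead-reckoning sketch (an illustrative assumption in Python; the paper does not give this code):

```python
import math

def dead_reckon(pose, distance, heading_deg):
    """Update an (x, y) position by moving `distance` along the current heading.
    Stands in for GPS: position comes only from odometry and angle tracking."""
    x, y = pose
    rad = math.radians(heading_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad))

# Travel 10 units along heading 0°, turn to 90°, travel 5 units.
pose = (0.0, 0.0)
pose = dead_reckon(pose, 10.0, 0.0)
pose = dead_reckon(pose, 5.0, 90.0)   # ≈ (10.0, 5.0)
```

Each crossroad's pose recorded this way becomes a graph node, so the whole map is built without any external localization signal.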
