Abstract
Roundabouts provide safe and fast circulation as well as many environmental advantages, but drivers who adopt unsafe behaviours while circulating through them can cause safety issues and provoke accidents. In this paper we propose a way of training an autonomous vehicle to behave in a human-like and safe way when entering a roundabout. By placing a number of cameras on our vehicle and processing their video feeds through a series of algorithms, including Machine Learning, we build a representation of the state of the surrounding environment. We then use another set of Deep Learning algorithms to analyse this representation and determine the safest way of circulating through a roundabout given the current state of the environment, including nearby vehicles with their estimated positions, speeds and accelerations. By watching multiple attempts of a human entering a roundabout with both safe and unsafe behaviours, our second set of algorithms learns to mimic the human's good attempts and act in the same way, which is key to a safe implementation of autonomous vehicles. This work details the series of steps that we took, from building the representation of our environment to acting according to it, in order to attain safe entry into single-lane roundabouts.
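To make the kind of state representation described above concrete, the sketch below models each nearby circulating vehicle by its estimated position, speed and acceleration, and applies a simple time-to-arrival gap check. This is our own illustrative example, not the paper's actual model: the class and function names (`TrackedVehicle`, `safe_to_enter`) and the 4-second gap threshold are hypothetical, and the paper's decision-making is learned with Deep Learning rather than hand-coded.

```python
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    # Hypothetical per-vehicle state estimated from camera footage
    distance_m: float    # arc distance to our entry point (m)
    speed_mps: float     # estimated speed (m/s)
    accel_mps2: float    # estimated acceleration (m/s^2)

    def time_to_entry(self) -> float:
        """Seconds until this vehicle reaches the entry point, using
        constant-acceleration kinematics: d = v*t + (a/2)*t**2."""
        a, v, d = self.accel_mps2, self.speed_mps, self.distance_m
        if abs(a) < 1e-9:  # constant speed
            return d / v if v > 0 else float("inf")
        disc = v * v + 2.0 * a * d
        if disc < 0:  # decelerates to a stop before arriving
            return float("inf")
        return (-v + disc ** 0.5) / a

def safe_to_enter(vehicles, required_gap_s=4.0):
    """Enter only if every circulating vehicle is at least
    `required_gap_s` seconds away (illustrative threshold)."""
    return all(v.time_to_entry() >= required_gap_s for v in vehicles)
```

For instance, a vehicle 30 m away travelling at a constant 10 m/s reaches the entry point in 3 s, so with a 4 s required gap the check refuses entry; a learned policy would instead infer such decisions from the human demonstrations.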
Highlights
Computer Vision is a field that is undergoing a tremendous evolution
In order to achieve that, we use the footage from a single camera installed in our vehicle instead of relying on LIDAR (Light Detection and Ranging) data, which would be a more traditional approach
LIDAR is a surveying method that uses a series of laser beams to measure the distance to a target by tracking the time the light takes to bounce back to the sensor
Summary
Computer Vision is a field that is undergoing a tremendous evolution. Over the last few years, Artificial Neural Networks (from now on referred to as ANN or NN) have been used to process images due to their high accuracy and fast performance, outperforming more traditional approaches in benchmarks while being relatively easy to implement and reuse. LIDAR is a surveying method that uses a series of laser beams to measure the distance to a target by tracking the time the light takes to bounce back to the sensor. Mounting one of these sensors on top of the vehicle, spinning multiple times per second (normally at a rate of 20 spins per second), creates a 3D scan of the environment in real time. Traditional cameras are a very common type of sensor, so being able to rely on their data and leave aside the expensive LIDAR will make future investigation and implementation in autonomous vehicles much more approachable. This task should be feasible, as a human subject is capable of performing it using just one eye, relying on years of experience perceiving environments. All of this work has been done using the Carla Simulator [1] as the testing environment and Keras as the Deep Learning framework.
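The time-of-flight principle behind LIDAR can be stated explicitly: the sensor measures the round-trip time of a laser pulse, and since the light travels to the target and back, the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is ours, for illustration):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance to a target from a laser pulse's round-trip time.
    The pulse travels out and back, hence the division by two."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

A pulse that returns after 1 microsecond therefore corresponds to a target roughly 150 m away, which gives a sense of the timing precision such sensors require.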