Abstract

Autonomous mobile robots are usually faced with challenging situations when driving in complex environments. Namely, they have to recognize static and dynamic obstacles, plan the driving path and execute their motion. To address the issues of perception and path planning, in this paper we introduce OctoPath, an encoder-decoder deep neural network trained in a self-supervised manner to predict the local optimal trajectory for the ego-vehicle. Using the discretization provided by a 3D octree environment model, our approach reformulates trajectory prediction as a classification problem with a configurable resolution. During training, OctoPath minimizes the error between the predicted and the manually driven trajectories in a given training dataset. This allows us to avoid the pitfall of regression-based trajectory estimation, in which there is an infinite state space for the output trajectory points. Environment sensing is performed using a 40-channel mechanical LiDAR sensor, fused with an inertial measurement unit and wheel odometry for state estimation. The experiments are performed both in simulation and in real life, using our own GridSim simulator and RovisLab's Autonomous Mobile Test Unit platform. We evaluate the predictions of OctoPath in different driving scenarios, both indoor and outdoor, while benchmarking our system against a baseline hybrid A-Star algorithm and a regression-based supervised learning method, as well as against a CNN learning-based optimal path planning method.
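The abstract's key idea, recasting trajectory regression as classification over an octree-style grid, can be sketched as follows. This is an illustrative sketch only: the grid extent, resolution, and function names are assumptions for a 2D slice, not the paper's implementation.

```python
import numpy as np

def discretize_trajectory(points, resolution=0.2, grid_min=-10.0, grid_size=100):
    """Map continuous 2D trajectory points to discrete cell indices.

    Instead of regressing continuous coordinates (an infinite output
    space), each trajectory point becomes one class label out of
    grid_size**2 cells, so a softmax layer can predict it.
    """
    cells = np.floor((np.asarray(points) - grid_min) / resolution).astype(int)
    cells = np.clip(cells, 0, grid_size - 1)
    # Flatten (row, col) into a single class index for the classifier head.
    return cells[:, 0] * grid_size + cells[:, 1]

def cell_to_center(class_idx, resolution=0.2, grid_min=-10.0, grid_size=100):
    """Invert a class index back to the coordinates of the cell center."""
    row, col = divmod(class_idx, grid_size)
    return (grid_min + (row + 0.5) * resolution,
            grid_min + (col + 0.5) * resolution)
```

Recovering the cell center bounds the reconstruction error by half the configured resolution, which is why the classification reformulation trades a small, known quantization error for a finite output space.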

Highlights

  • Recent developments in the fields of deep learning and artificial intelligence have aided the autonomous driving domain’s rapid advancement

  • OctoPath was compared to the baseline hybrid A* algorithm [35], to a regression-based approach [25], and to a convolutional neural networks (CNN) learning-based approach [21]

  • We put the OctoPath algorithm to the test in two distinct environments: (I) in the GridSim simulator [36] (More information is available at www.rovislab.com/gridsim.html, accessed on 21 May 2021.) and (II) on RovisLab’s Autonomous Mobile Test Unit platform


Introduction

Recent developments in the fields of deep learning and artificial intelligence have aided the autonomous driving domain’s rapid advancement. Autonomous vehicles (AVs) are robotic systems that can navigate without the need for human intervention. The deployment of AVs is predicted to have a major impact on the future of mobility, bringing a variety of benefits to daily life, such as making driving simpler, increasing road network capacity, and minimizing vehicle-related crashes. For Advanced Driver Assistance Systems (ADAS) and autonomous robot control, one of the top priorities is ensuring functional safety. When a car is driving, it encounters a variety of dynamic traffic scenarios in which the moving objects in the environment may pose a risk to safe driving. Due to the complexity of such a task, deep learning models have been used to aid in solving it. There are several conceptually different self-driving architectures, namely end2end learning [1], Deep Reinforcement Learning [2] (DRL), and the sense-plan-act pipeline [3].
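The sense-plan-act pipeline mentioned above can be illustrated with a minimal control loop; the stubbed functions and data shapes below are hypothetical placeholders, not the paper's software stack.

```python
def sense():
    """Acquire a world state from sensors (stubbed: fixed pose and obstacle)."""
    return {"obstacles": [(2.0, 1.0)], "pose": (0.0, 0.0, 0.0)}

def plan(world_state, goal):
    """Compute a local trajectory toward the goal (stubbed straight line).

    A real planner would run, e.g., hybrid A* or a learned model here,
    using the obstacles in world_state to constrain the path.
    """
    return [world_state["pose"][:2], goal]

def act(trajectory):
    """Hand the next trajectory point to the motion controller (stubbed)."""
    return trajectory[1]

def control_step(goal):
    """One iteration of the sense-plan-act loop."""
    world_state = sense()
    trajectory = plan(world_state, goal)
    return act(trajectory)
```

In this decomposition, perception, planning, and control are separate modules with explicit interfaces, which is what distinguishes the pipeline from end2end approaches that map sensor input directly to actuation.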
