Abstract

Path planning is vital in autonomous vehicle technology: robots, self-driving cars, and driverless trucks cannot navigate without a proper path planning algorithm. Various such algorithms exist, Q-learning being one of them. Q-learning is used extensively in discrete applications, as it is effective at finding solutions to such problems. This research investigates the possibility of using Q-learning to solve the local path planning problem with obstacle avoidance. Q-learning is split into two phases: a training phase and an application phase. During training, Q-learning requires training time that grows exponentially with the size of the system's state space. Once trained, however, applying Q-learning reduces to a simple table lookup, which allows it to run on even the simplest microcontrollers. Two simulations are conducted in different environments: one to showcase the ability to learn the optimal path, the other to showcase the ability to learn navigation in variable environments. The first simulation was run in a static environment with one obstacle; with enough training episodes, Q-learning could solve the path planning problem in a minimal number of movement steps. The second simulation focuses on a randomized environment, in which the obstacles and the agent's starting position are chosen randomly at the start of every episode. During testing, Q-learning was able to find a path to the target whenever one existed; in certain configurations the vehicle could be stuck between obstacles with no feasible path or solution.
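The abstract does not report the grid dimensions, reward shaping, or hyperparameters used in the paper, so the sketch below is only a minimal illustration of the two-phase scheme it describes: an exploratory training phase that fills a Q-table, followed by an application phase that is a pure table lookup. The grid size, obstacle layout, rewards, and learning parameters are all assumed for the example.

```python
import random

# Minimal sketch of tabular Q-learning for grid-world path planning with
# obstacle avoidance. Grid size, rewards, and hyperparameters are assumed
# for illustration; the abstract does not state the values used in the paper.
GRID = 5                                       # assumed 5x5 grid
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
OBSTACLES = {(2, 2)}                           # assumed single static obstacle
START, GOAL = (0, 0), (4, 4)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed hyperparameters

# Q-table: one value per (cell, action) pair.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(GRID) for c in range(GRID)}

def step(state, a):
    """Apply action a in state; return (next_state, reward, done)."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID and 0 <= c < GRID) or (r, c) in OBSTACLES:
        return state, -5.0, False              # penalize walls and obstacles
    if (r, c) == GOAL:
        return (r, c), 10.0, True              # reward reaching the target
    return (r, c), -1.0, False                 # step cost favors short paths

# Training phase: epsilon-greedy exploration with the standard Q update.
for episode in range(5000):
    state = START
    for _ in range(200):                       # cap episode length
        a = (random.randrange(len(ACTIONS)) if random.random() < EPSILON
             else max(range(len(ACTIONS)), key=lambda i: Q[state][i]))
        nxt, reward, done = step(state, a)
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break

# Application phase: the learned policy reduces to a table lookup.
state, path = START, [START]
for _ in range(GRID * GRID):
    if state == GOAL:
        break
    a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    state, _, _ = step(state, a)
    path.append(state)
print(path)
```

The small negative step cost is one conventional way to make the greedy policy prefer the shortest obstacle-free path, matching the minimal-movement-steps result described above; the randomized-environment experiment would additionally resample OBSTACLES and START each episode.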
