Abstract

Reinforcement learning (RL) is a rapidly growing area of artificial intelligence, with applications ranging from medicine and finance to manufacturing and the gaming industry. Although multiple works argue that RL can be key to a large part of intelligent vehicle control problems, many practical issues remain to be addressed, such as the safety problems that can result from non-optimal training: for an RL agent to be effective, it should first encounter during training all the situations it may face later, which is often difficult in the real world. In this work we investigate the impact of RL in the context of intelligent vehicle control. We analyse the implications of RL in path planning tasks and discuss two possible approaches to bridge the gap between the theoretical developments of RL and its practical applications. First, this paper discusses the role of Curriculum Learning (CL) in structuring the learning process of intelligent vehicle control in a gradual way; the results show that CL can play an important role in training agents in this context. Second, we discuss a method for transferring RL policies from simulation to reality, so that the agent can experience situations in simulation and know how to react to them in reality. For this, we use Arduino Yún controlled robots as our platforms. The results demonstrate the effectiveness of the presented approach and show how RL policies can be transferred from simulation to reality even when the platforms are resource limited.
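
As a rough, hypothetical illustration of what transferring a learned policy to a resource-limited platform can involve, the sketch below collapses a Q-table learned in simulation into a one-byte-per-state greedy-action lookup and renders it as a C array that a microcontroller program could query; the array name, table sizes, and export format are assumptions for illustration, not the procedure used in the paper.

```python
# Hedged sketch: exporting a greedy policy learned in simulation so it can run
# on a resource-limited controller. Names and sizes are illustrative only.
import numpy as np

def export_greedy_policy(q_table: np.ndarray) -> bytes:
    """Collapse a (n_states, n_actions) Q-table into one byte per state,
    keeping only the index of the best action for each state."""
    greedy_actions = np.argmax(q_table, axis=1).astype(np.uint8)
    return greedy_actions.tobytes()

def as_c_array(policy_bytes: bytes, name: str = "POLICY") -> str:
    """Render the policy as a C array that could be compiled into a
    microcontroller sketch and queried with POLICY[state] at run time."""
    body = ", ".join(str(b) for b in policy_bytes)
    return f"const uint8_t {name}[{len(policy_bytes)}] = {{{body}}};"

if __name__ == "__main__":
    q_table = np.random.rand(64, 4)  # e.g. an 8x8 grid with 4 actions (toy data)
    print(as_c_array(export_greedy_policy(q_table)))
```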

Highlights

  • Reinforcement learning has been well studied in the recent past as it is considered one of the most prominent paradigms in machine learning [1,2]

  • We investigate the impact of environmental complexity on the learning process of Reinforcement learning (RL) tasks involving path planning scenarios

  • We discuss a method for transferring RL policies from the simulation domain to the real-world domain, supported by empirical evidence and a working algorithm for the discussed method

  • We show how Curriculum Learning (CL) can be applied within the context of intelligent vehicle control in tasks involving multiple agents

  • The experiments in this paper are three-fold: first, we use Q-learning in a set of different environments to analyse the impact of environmental complexity on the learning process of the RL agent in path planning tasks for intelligent vehicle control (a minimal illustrative sketch follows this list)
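
As a concrete, minimal reference for the kind of experiment described in the first highlight, the sketch below runs tabular Q-learning on grid-world path-planning tasks of increasing size and reuses the same Q-table across phases as a crude curriculum; the environment, reward values, and hyperparameters are illustrative assumptions, not the exact setup used in the experiments.

```python
# Hedged sketch: tabular Q-learning on grid-world path-planning tasks of
# increasing size, with the same table reused across phases as a crude curriculum.
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action, size):
    """Move within a size x size grid; reward 1 when the far-corner goal is reached."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), size - 1), min(max(c + dc, 0), size - 1))
    done = nxt == (size - 1, size - 1)
    return nxt, (1.0 if done else 0.0), done

def q_values(q, state):
    """Lazily create the action-value list for a state on first visit."""
    return q.setdefault(state, [0.0] * len(ACTIONS))

def train(size, episodes, q, alpha=0.1, gamma=0.95, eps=0.1, max_steps=200):
    """One training phase; reusing `q` lets simpler grids warm-start larger ones."""
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(max_steps):
            qs = q_values(q, state)
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else qs.index(max(qs)))
            nxt, reward, done = step(state, a, size)
            qs[a] += alpha * (reward + gamma * max(q_values(q, nxt)) - qs[a])
            state = nxt
            if done:
                break
    return q

# Crude curriculum: train the same Q-table on progressively larger grids.
q = {}
for grid_size in (4, 6, 8):
    q = train(grid_size, episodes=500, q=q)
```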


Introduction

Reinforcement learning has been well studied in the recent past, as it is considered one of the most prominent paradigms in machine learning [1,2]. As one of the first applications of RL to real robots, the authors of [11] present a robot that learns how to push boxes and is trained using RL. Works such as [12,13] propose new methods that use RL in mobile robots and autonomous vehicles to improve on-site learning. To overcome one of the limitations of using Q-learning for path planning tasks, namely the impracticality of storing a Q-table covering all states, [34] proposes a real-time Q-learning approach that avoids building the table in advance. Other algorithms, such as Deep Q-Networks (DQNs) [5], have been proposed to overcome similar limitations.
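
As a minimal illustration of the storage issue raised above, the sketch below keeps a sparse, lazily created Q-table so that memory grows only with the states actually visited; this shows the underlying idea only and is not the real-time method of [34] nor a DQN.

```python
# Hedged sketch: instead of allocating Q-values for every possible state up
# front, entries are materialised the first time a state is looked up.
from collections import defaultdict

N_ACTIONS = 4  # illustrative action count for a path-planning agent

# Q[state] is created on demand, so memory scales with visited states only.
Q = defaultdict(lambda: [0.0] * N_ACTIONS)

def greedy_action(state):
    """Pick the best known action; unseen states default to all-zero values."""
    values = Q[state]
    return values.index(max(values))
```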
