Abstract

Intelligent Transportation Systems (ITS) are gaining substantial attention owing to the benefits they offer to vehicle users. In the ITS paradigm, content is normally obtained from roadside units (RSUs). In some scenarios, however, terrestrial networks are partially or temporarily out of service. Unmanned aerial vehicle (UAV), or drone, cells are expected to be one of the pillars of future networks, assisting vehicular networks in such scenarios. To this end, we propose a collaborative framework between UAVs and in-service RSUs to serve vehicles in such partially covered areas. Our objective is to maximize the amount of content downloaded to vehicles while accounting for the dynamic nature of the network. Motivated by the success of machine learning (ML) techniques, particularly deep reinforcement learning, in solving complex problems, we formulate the scheduling and content management policy problem as a Markov Decision Process (MDP) whose system state space captures the vehicular network dynamics. Proximal Policy Optimization (PPO) is utilized to govern the content decisions in the vehicular network. Simulation results show that, over the mission time, the proposed algorithm learns the vehicular environment and its dynamics and handles the complex action space.
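The MDP formulation sketched in the abstract can be illustrated with a toy environment. Everything below is an illustrative assumption, not the paper's model: the state variables (vehicle position, remaining content, UAV battery), the RSU coverage map, and the per-step link rates are invented for the sketch. The reward is the content delivered per step, so maximizing the episode return corresponds to maximizing the total downloaded content.

```python
class UavRsuContentEnv:
    """Toy MDP sketch (illustrative assumptions only, not the paper's model).

    State:  (vehicle position index, remaining content chunks, UAV battery).
    Action: 0 = download from RSU, 1 = download from UAV, 2 = idle.
    Reward: content chunks delivered in the step.
    """

    ROAD_LENGTH = 10                 # discrete road segments
    RSU_COVERED = {0, 1, 2, 7, 8}    # segments with an in-service RSU
    TOTAL_CHUNKS = 20                # content the vehicle wants to download
    UAV_BATTERY = 15                 # UAV transmission budget (steps)

    def reset(self):
        self.pos = 0
        self.remaining = self.TOTAL_CHUNKS
        self.battery = self.UAV_BATTERY
        return (self.pos, self.remaining, self.battery)

    def step(self, action):
        delivered = 0
        if action == 0 and self.pos in self.RSU_COVERED:
            delivered = min(2, self.remaining)   # RSU link: 2 chunks/step
        elif action == 1 and self.battery > 0:
            delivered = min(1, self.remaining)   # UAV link: 1 chunk/step
            self.battery -= 1
        self.remaining -= delivered
        self.pos = min(self.pos + 1, self.ROAD_LENGTH - 1)  # vehicle moves on
        done = self.remaining == 0 or self.pos == self.ROAD_LENGTH - 1
        return (self.pos, self.remaining, self.battery), delivered, done
```

In practice, a PPO agent (e.g. from an RL library) would be trained on such an environment to learn when to schedule RSU versus UAV downloads; the sketch only shows the state/action/reward structure, not the PPO training loop itself.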
