Abstract

As important services of the future sixth-generation (6G) wireless networks, vehicular communication and mobile edge computing (MEC) have received considerable interest in recent years for their significant potential applications in intelligent transportation systems. However, MEC-enabled vehicular networks depend heavily on network access and communication infrastructure, which are often unavailable in remote areas, leaving computation offloading prone to failure. To address this issue, we propose an MEC-enabled vehicular network assisted through aerial-terrestrial connectivity to provide network access and high data-rate entertainment services to vehicles. We present a time-varying, dynamic system model in which high-altitude platforms (HAPs) equipped with MEC servers, connected to a backhaul system of low-Earth-orbit (LEO) satellites, provide computation offloading capability to the vehicles as well as network access for vehicle-to-vehicle (V2V) communications. Our main objective is to minimize the total computation and communication overhead of the system of vehicles through joint computation-offloading and resource-allocation strategies. Since the formulated optimization problem is a mixed-integer non-linear programming (MINLP) problem, which is NP-hard, we propose a decentralized value-iteration-based reinforcement learning (RL) approach as a solution. In our Q-learning formulation, each vehicle acts as an intelligent agent that learns its own offloading and resource-allocation strategy. We further extend our solution to deep Q-learning (DQL) and double deep Q-learning to overcome the curse of dimensionality and the value-function overestimation that afflict Q-learning. Simulation results demonstrate that our solution reduces system cost compared to baseline schemes.
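To make the learning step concrete, the sketch below shows the tabular Q-learning update a per-vehicle agent would perform, together with a double-Q-style target that decouples action selection from action evaluation to mitigate overestimation. This is a minimal illustration, not the paper's implementation: the state and action space sizes, reward signal, and hyperparameters (ALPHA, GAMMA, EPS) are placeholder assumptions.

```python
# Minimal sketch (assumed setup, not the authors' configuration) of the
# per-vehicle Q-learning agent for joint offloading/resource allocation.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 50            # assumed discretized system states (e.g., channel/queue levels)
N_ACTIONS = 6            # assumed joint offloading + resource-allocation choices
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # illustrative learning rate, discount, exploration

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state: int) -> int:
    """Epsilon-greedy selection over the agent's Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def q_update(s: int, a: int, reward: float, s_next: int) -> None:
    """Standard Q-learning (value-iteration) update.

    The max over Q[s_next] is the source of the overestimation bias
    that double deep Q-learning addresses.
    """
    td_target = reward + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (td_target - Q[s, a])

def double_q_target(reward: float, s_next: int,
                    Q_online: np.ndarray, Q_target: np.ndarray) -> float:
    """Double-Q-style target: select the next action with the online
    table/network, evaluate it with a separate target table/network."""
    a_star = int(np.argmax(Q_online[s_next]))
    return reward + GAMMA * Q_target[s_next, a_star]

# Toy usage: one transition, with the (assumed) reward taken as the
# negative of the total computation-plus-communication overhead.
s = 0
a = choose_action(s)
q_update(s, a, reward=-1.0, s_next=1)
```

In a deep Q-learning variant, the Q-table above would be replaced by a neural network so the agent can handle the large joint state space, which is the dimensionality issue the abstract refers to.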
