Abstract

With the development of 5G technology, the requirements for data communication and computation in emerging 5G-enabled vehicular networks are becoming increasingly stringent. Computation-intensive or delay-sensitive tasks generated by vehicles need to be processed in real time, and mobile edge computing (MEC) is an appropriate solution: wireless users or vehicles can offload computation tasks to an MEC server because it has strong computation capability and is close to the wireless users or vehicles. However, the communication and computation resources of a single MEC server are not sufficient to execute continuously generated computation-intensive or delay-sensitive tasks, so we consider migrating computation tasks to other MEC servers to reduce the computation and communication pressure on the current MEC server. In this article, we construct an MEC-based computation offloading framework for vehicular networks that accounts for time-varying channel states and stochastically arriving computation tasks. To minimize the total cost of the proposed MEC framework, which consists of the delay cost, energy consumption cost, and bandwidth cost, we propose a deep reinforcement learning-based computation migration and resource allocation (RLCMRA) scheme that requires no prior knowledge. The RLCMRA algorithm learns the optimal offloading and migration policy adaptively so as to maximize the average cumulative reward (i.e., minimize the total cost). Extensive numerical results show that the proposed RLCMRA algorithm adaptively learns the optimal policy and outperforms four baseline algorithms.
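
The cost structure described above can be made concrete with a small sketch: the per-slot cost is a weighted sum of the delay, energy consumption, and bandwidth costs, and the RL reward is its negative. The function names and weight values below are illustrative assumptions, not the paper's notation:

    # Hedged sketch of the per-slot cost/reward structure from the abstract.
    # Weight values are illustrative assumptions, not the paper's parameters.
    def slot_cost(delay, energy, bandwidth,
                  w_delay=1.0, w_energy=0.5, w_bandwidth=0.2):
        """Weighted sum of delay, energy-consumption, and bandwidth costs."""
        return w_delay * delay + w_energy * energy + w_bandwidth * bandwidth

    def reward(delay, energy, bandwidth):
        """Maximizing cumulative reward is equivalent to minimizing total cost."""
        return -slot_cost(delay, energy, bandwidth)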

Highlights

  • With the development of wireless and vehicular networks, autonomous vehicles have gradually been introduced [1]–[3]. These networks are expected to become intelligent systems that can make predictions based on the results of self-learning and make decisions using machine learning methods such as deep learning (DL) and reinforcement learning (RL) [4]–[6].

  • Optimal scheme based on DRL: As discussed in Section III, our goal is to find the optimal computation offloading and task migration policy that minimizes the total cost, i.e., the weighted sum of the total delay, total energy consumption, and total bandwidth cost over all time slots (see the training sketch after this list).

  • Numerical results: To verify and evaluate the proposed reinforcement learning-based computation migration and resource allocation (RLCMRA) algorithm, we establish a simulation environment based on the proposed mobile edge computing (MEC) system.
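
As referenced in the highlight above, the following is a minimal, hedged sketch of how a DRL agent of this kind can be trained: a small deep Q-network chooses among local execution, offloading to the serving MEC server, and migration to a neighboring MEC server in a toy environment. The state layout, transition dynamics, network size, and hyperparameters are assumptions for illustration, not the paper's actual RLCMRA design:

    # Hedged sketch: a tiny deep Q-network (DQN) agent choosing among
    # {local execution, offload to serving MEC, migrate to neighbor MEC}.
    # Dynamics and hyperparameters are illustrative assumptions only.
    import random
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 4, 3  # state: [task size, channel gain, local load, MEC load]

    q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, N_ACTIONS))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, epsilon = 0.99, 0.1  # discount factor and exploration rate

    def step(state, action):
        """Toy transition: reward is the negative weighted cost (assumed form).
        A real model would derive the costs from the state and action."""
        delay, energy, bandwidth = random.random(), random.random(), random.random()
        reward = -(1.0 * delay + 0.5 * energy + 0.2 * bandwidth)
        next_state = torch.rand(STATE_DIM)  # stochastic arrivals / channel states
        return next_state, reward

    state = torch.rand(STATE_DIM)
    for t in range(1000):  # time slots
        # epsilon-greedy selection over offloading/migration actions
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            with torch.no_grad():
                action = q_net(state).argmax().item()
        next_state, reward_t = step(state, action)
        # one-step TD target (replay buffer and target network omitted for brevity)
        target = reward_t + gamma * q_net(next_state).max().detach()
        loss = (q_net(state)[action] - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state

Here the per-slot reward plays the role of the negative total cost from the abstract; the paper's own scheme additionally learns resource allocation, which this sketch omits.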


Summary

Introduction

With the development of wireless and vehicular networks, autonomous vehicles have gradually been introduced [1]–[3]. These networks are expected to become intelligent systems that can make predictions based on the results of self-learning and make decisions using machine learning methods such as deep learning (DL) and reinforcement learning (RL) [4]–[6]. One of the more difficult problems in communicating with cloud servers is the backhaul delay of downlink transmission, which makes it impossible to meet the low-latency requirements of vehicular networks [12], [13].

