Abstract

The vehicular network is attracting great attention from both academia and industry as an enabler of the intelligent transportation system (ITS), autonomous driving, and smart cities. The network is extremely dynamic because of the fast mobility of vehicles. As the number of applications in the vehicular network grows rapidly, the quality of service (QoS) requirements in the 5G vehicular network become increasingly diverse. One of the most stringent requirements is the safety-critical real-time service. To guarantee low latency and other diverse QoS requirements, wireless network resources must be effectively utilized and allocated among vehicles, including computation power in cloud, fog, and edge servers as well as spectrum at roadside units (RSUs) and base stations (BSs). Historically, resource allocation has mostly been formulated as optimization problems solved by mathematical methods. However, these optimization problems are usually nonconvex and hard to solve. Recently, machine learning (ML) has emerged as a powerful technique to cope with computational complexity and to handle big data and data analysis in the heterogeneous vehicular network. In this paper, an overview of resource allocation in the 5G vehicular network is presented, covering both traditional optimization and advanced ML approaches, especially deep reinforcement learning (DRL). In addition, a federated deep reinforcement learning- (FDRL-) based vehicular communication scheme is proposed. The challenges, open issues, and future research directions for 5G and toward 6G vehicular networks are discussed. Multiaccess edge computing assisted by network slicing and a distributed federated learning (FL) technique is analyzed, and an FDRL-based UAV-assisted vehicular communication is discussed to point out future research directions for these networks.
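The abstract does not detail how the FDRL aggregation works; as a rough illustration only, the following minimal Python sketch shows a FedAvg-style step in which vehicles upload locally trained DRL policy weights and an RSU or edge server combines them. All names, array shapes, and sample counts here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def federated_average(local_weights, data_sizes):
    """Aggregate local DRL policy weights with a FedAvg-style weighted mean.

    local_weights: list of 1-D numpy arrays, one per vehicle (flattened policy parameters).
    data_sizes:    number of local experience samples each vehicle trained on.
    """
    total = sum(data_sizes)
    global_weights = np.zeros_like(local_weights[0])
    for w, n in zip(local_weights, data_sizes):
        global_weights += (n / total) * w  # weight each vehicle by its share of local data
    return global_weights

# Example: three vehicles upload locally trained policy parameters to an RSU/edge server.
vehicle_weights = [np.random.randn(8) for _ in range(3)]  # placeholder local models
vehicle_samples = [120, 300, 80]                          # illustrative replay-buffer sizes
global_model = federated_average(vehicle_weights, vehicle_samples)
print(global_model)
```

In a full FDRL scheme the aggregated model would be broadcast back to the vehicles for further local training; only model parameters, not raw driving data, leave each vehicle.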

Highlights

  • The 5G new radio (NR) is driven by the demand for large data volumes resulting from the rapid growth of cellular mobile devices and vehicles [1]

  • Although existing surveys cover recent advances in the amalgamation of prominent technologies and outline future research directions, they do not discuss resource management across cloud, fog, and edge computing based on deep reinforcement learning (DRL) and federated deep reinforcement learning (FDRL), assisted by unmanned aerial vehicles (UAVs), to reduce latency in the vehicular network

  • Recent advanced techniques for resource management based on conventional optimization theory and machine learning (ML), especially deep reinforcement learning (DRL), are reviewed. These techniques are discussed at the cloud, fog, and edge layers to guarantee diverse quality of service (QoS) requirements in the 5G vehicular network


Summary

Introduction

The 5G new radio (NR) is driven by the demand for large data volumes resulting from the rapid growth of cellular mobile devices and vehicles [1]. Although existing surveys cover recent advances in the amalgamation of prominent technologies and outline future research directions, they do not discuss resource management across cloud, fog, and edge computing based on DRL and FDRL, assisted by UAVs, to reduce latency in the vehicular network. Because a comprehensive overview of computation power in cloud, fog, and edge servers and of spectrum allocation supported by ML, especially DRL and FDRL algorithms, and assisted by UAVs in 5G and toward 6G vehicular networks has not received much attention, this comprehensive survey presents DRL-based computation power and resource allocation to guarantee the diverse QoS requirements.
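The introduction states the DRL-based allocation goal without fixing a particular algorithm. As a minimal sketch of the idea, assuming a toy tabular Q-learning agent, a synthetic latency model, and illustrative state and action definitions (none of which come from the paper), the loop below learns which allocation decision keeps delay low for each discretized network state.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # e.g. discretized channel/queue conditions at the RSU (illustrative)
N_ACTIONS = 3   # e.g. offload to edge, offload to fog, process locally (illustrative)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: returns (reward, next_state).

    Reward is the negative of a synthetic latency, so the agent learns
    the allocation that keeps delay low for the current network state.
    """
    latency = rng.uniform(1, 10) / (action + 1) + state  # placeholder latency model
    return -latency, rng.integers(N_STATES)

state = 0
for _ in range(5000):
    if rng.random() < epsilon:
        action = rng.integers(N_ACTIONS)      # explore a random allocation
    else:
        action = int(np.argmax(Q[state]))     # exploit the learned values
    reward, next_state = step(state, action)
    # Standard Q-learning update toward the bootstrapped target
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))  # learned allocation decision per network state
```

A deep RL variant of this loop would replace the table Q with a neural network so that continuous channel, queue, and mobility observations can be handled, which is the setting the survey targets.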

Background of Deep Reinforcement Learning
QoS Requirements in the 5G and 6G Vehicular Networks
FDRL-Based UAV-Assisted Vehicular Communication
Federated Deep Reinforcement Learning-Based Vehicular Network
FDRL-Based UAV-Assisted Vehicular Network
Conclusions
