Abstract

5G vehicle-to-everything (V2X) connectivity is crucial for the complex vehicular networking environments that will underpin intelligent transportation systems (ITS). However, for mission-critical applications such as safety services, unreliable vehicle-to-vehicle (V2V) links and the heavy signaling overhead of centralized resource-allocation methods remain key obstacles. This work addresses the joint optimization of transmission-mode selection and resource-block allocation in a 5G-V2X communication scenario. The problem is formulated as a Markov decision process, and a decentralized deep reinforcement learning (DRL) algorithm is presented to maximize the sum capacity of vehicle-to-infrastructure (V2I) users while satisfying the latency and reliability constraints of the V2V links. In addition, to overcome the limited training data available to local DRL models, a two-timescale asynchronous federated DRL algorithm is employed to make the system robust. On the large timescale, graph-based vehicle clustering groups neighboring vehicles into clusters; on the small timescale, vehicles within the same cluster cooperatively train a robust global model via asynchronous federated DRL. The effects of the outage threshold and vehicular density on network performance are evaluated. Simulation results show that the proposed scheme outperforms previous state-of-the-art approaches, and its overall superiority and convergence are verified.
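The two-timescale structure described above can be sketched in miniature: on the large timescale, vehicles within communication range of one another are clustered via connected components of a proximity graph; on the small timescale, each cluster repeatedly averages its members' local model updates into a per-cluster global model, in the spirit of federated averaging. All names, the 1-D positions, and the toy quadratic "training" objective below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-timescale federated DRL scheme (assumed details).
import numpy as np

def cluster_vehicles(positions, radius):
    """Large timescale: graph-based clustering. Vehicles within `radius`
    are connected; clusters are the connected components."""
    n = len(positions)
    adj = [[j for j in range(n) if j != i
            and abs(positions[i] - positions[j]) <= radius] for i in range(n)]
    seen, clusters = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v])
        clusters.append(sorted(comp))
    return clusters

def local_update(weights, rng, lr=0.1):
    """Small timescale: one noisy local update, a stand-in for a DRL
    gradient step on the vehicle's own observations (toy objective:
    minimize ||w - 1||^2)."""
    grad = weights - 1.0 + 0.01 * rng.standard_normal(weights.shape)
    return weights - lr * grad

def federated_round(global_w, members, rng):
    """Aggregate cluster members' local models by averaging their updates."""
    local_models = [local_update(global_w.copy(), rng) for _ in members]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
positions = np.array([0.0, 1.0, 1.5, 10.0, 10.5])  # two natural groups
clusters = cluster_vehicles(positions, radius=2.0)
global_models = {tuple(c): np.zeros(4) for c in clusters}
for _ in range(50):  # many small-timescale rounds per clustering epoch
    for c in clusters:
        global_models[tuple(c)] = federated_round(global_models[tuple(c)], c, rng)
```

In the full scheme, `local_update` would be an actual DRL step (e.g., a Q-network update on locally observed channel states), and the aggregation would be asynchronous rather than the lock-step averaging shown here; the two-loop structure is the point of the sketch.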

