Abstract

Channel allocation has a direct and profound impact on the performance of vehicle-to-everything (V2X) networks. Given the dynamic nature of vehicular environments, it is appealing to devise a blended strategy for effective resource sharing. In this paper, we exploit deep learning techniques to predict vehicles’ mobility patterns. We then propose an architecture that combines centralized decision making with distributed channel allocation to maximize the spectrum efficiency of all vehicles involved. To achieve this, we leverage two deep reinforcement learning techniques, namely the deep Q-network (DQN) and advantage actor-critic (A2C) methods. In addition, given the time-varying nature of user mobility, we further incorporate long short-term memory (LSTM) into both the DQN and A2C techniques. The combined system tracks user mobility, varying demands, and channel conditions, and adapts resource allocation dynamically. We verify the performance of the proposed methods through extensive simulations and demonstrate the effectiveness of the proposed LSTM-DQN and LSTM-A2C algorithms using real data obtained from the California Department of Transportation.
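To give a flavor of the distributed channel-allocation idea described above, the following is a minimal toy sketch, not the paper's method: each vehicle is an independent learning agent that picks a channel and receives a reward only when its pick is collision-free. A tabular Q-learner stands in for the paper's DQN/A2C agents (no LSTM, no real channel model); all parameters (`EPSILON`, `ALPHA`, `GAMMA`, agent and channel counts) are illustrative assumptions.

```python
import random

# Toy stand-in for distributed channel allocation among vehicles:
# each agent keeps its own Q-values over channels and learns via
# epsilon-greedy tabular Q-learning. Reward = 1 if no other agent
# chose the same channel in that step (collision-free), else 0.
# Hypothetical hyperparameters; the paper uses DQN/A2C with LSTM.

N_VEHICLES = 4
N_CHANNELS = 4
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

random.seed(0)
# One Q-row per vehicle: Q[agent][channel]
Q = [[0.0] * N_CHANNELS for _ in range(N_VEHICLES)]

def choose(agent):
    """Epsilon-greedy channel choice for one vehicle."""
    if random.random() < EPSILON:
        return random.randrange(N_CHANNELS)
    q = Q[agent]
    return q.index(max(q))

for step in range(2000):
    picks = [choose(a) for a in range(N_VEHICLES)]
    for a, ch in enumerate(picks):
        # Collision-free pick earns reward 1, otherwise 0.
        reward = 1.0 if picks.count(ch) == 1 else 0.0
        best_next = max(Q[a])  # stateless problem: next state = same state
        Q[a][ch] += ALPHA * (reward + GAMMA * best_next - Q[a][ch])

# Greedy allocation after training; agents tend to spread across
# channels, though convergence is not guaranteed in general.
greedy = [Q[a].index(max(Q[a])) for a in range(N_VEHICLES)]
print(greedy)
```

In the paper's setting, the tabular Q-row would be replaced by a neural Q-network (or an actor-critic pair) whose input includes an LSTM summary of recent mobility and channel observations, so the policy can adapt as conditions change.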
