Abstract
<p>Channel allocation has a direct and profound impact on the performance of vehicle-to-everything (V2X) networks. Given the dynamic nature of vehicular environments, it is appealing to devise a blended strategy for effective resource sharing. In this paper, we exploit deep learning techniques to predict vehicles' mobility patterns. We then propose an architecture that combines centralized decision making with distributed channel allocation to maximize the spectrum efficiency of all vehicles involved. To achieve this, we leverage two deep reinforcement learning techniques, namely the deep Q-network (DQN) and advantage actor-critic (A2C). In addition, given the time-varying nature of user mobility, we further incorporate long short-term memory (LSTM) into the DQN and A2C techniques. The combined system tracks user mobility, varying demands, and channel conditions, and adapts resource allocation dynamically. We verify the performance of the proposed methods through extensive simulations and demonstrate the effectiveness of the proposed LSTM-DQN and LSTM-A2C algorithms using real data obtained from the California state transportation department.</p>
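To give a rough intuition for reinforcement-learning-based channel allocation, the following is a minimal toy sketch, not the paper's LSTM-DQN or LSTM-A2C method: it substitutes a tabular Q-learning rule for the deep networks, and all quantities (3 vehicles, 2 channels, the collision-based reward, the learning rates) are illustrative assumptions. Each vehicle independently chooses a channel epsilon-greedily, mirroring the distributed-allocation idea, and its reward falls as more vehicles share its channel.

```python
import random

# Toy setup (all values assumed for illustration): 3 vehicles, 2 channels.
N_VEHICLES, N_CHANNELS = 3, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# One Q-table per vehicle: state = its current channel, action = next channel.
Q = [[[0.0] * N_CHANNELS for _ in range(N_CHANNELS)] for _ in range(N_VEHICLES)]

def reward(assignment):
    # Crude spectrum-efficiency proxy: reward 1 / (number of co-channel users),
    # so collisions on a channel reduce every co-channel vehicle's reward.
    counts = [assignment.count(c) for c in range(N_CHANNELS)]
    return [1.0 / counts[c] for c in assignment]

def step(state, rng):
    # Each vehicle picks a channel epsilon-greedily (distributed allocation).
    actions = []
    for v in range(N_VEHICLES):
        if rng.random() < EPSILON:
            actions.append(rng.randrange(N_CHANNELS))
        else:
            row = Q[v][state[v]]
            actions.append(row.index(max(row)))
    r = reward(actions)
    # Standard one-step Q-learning update per vehicle.
    for v in range(N_VEHICLES):
        best_next = max(Q[v][actions[v]])
        Q[v][state[v]][actions[v]] += ALPHA * (
            r[v] + GAMMA * best_next - Q[v][state[v]][actions[v]])
    return actions

rng = random.Random(0)
state = [0] * N_VEHICLES
for _ in range(2000):
    state = step(state, rng)

print(state)  # final per-vehicle channel assignment
```

The paper's methods replace the Q-tables with neural networks and add LSTM layers so the policy can condition on mobility history; this sketch only shows the underlying reward-driven allocation loop.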