Abstract

With the explosive growth of computation-intensive vehicular applications, the demand for computational resources in vehicular networks has increased dramatically. However, some vehicular networks are deployed in environments that lack the resource-rich facilities needed to support computationally expensive applications. In this work we propose a new scheme that enables computational resource sharing among vehicles in a vehicular cloud network (VCN), which can be formulated as a complex multi-knapsack problem, and we develop a deep reinforcement learning (DRL) algorithm to solve it. Because the parallel learning and exploration processes of the vehicles make the environment non-stationary, resource sharing in such a network is a typical multi-agent problem; we therefore model it as a Markov game. In addition, to handle the heterogeneity of the computational resources, a multi-hot encoding scheme is designed to standardize the action space of the DRL agents. Furthermore, we adopt a centralized-training, decentralized-execution framework solved with a multi-agent deep deterministic policy gradient (MADDPG) algorithm. Numerical simulation results demonstrate the effectiveness of the proposed scheme.
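The abstract's multi-hot encoding idea can be illustrated with a minimal sketch. The function below is a hypothetical construction (not the paper's exact scheme): each heterogeneous resource type is expanded into as many binary slots as its capacity, so every vehicle's sharing decision maps to a fixed-length binary action vector regardless of how its resources differ.

```python
def multi_hot_action(shared_units, capacities):
    """Encode a vehicle's resource-sharing decision as a multi-hot vector.

    shared_units[i] -- units of resource type i the vehicle offers to share
    capacities[i]   -- total units of resource type i the vehicle owns

    Each resource type is expanded into capacities[i] binary slots; the
    first shared_units[i] slots are set to 1. This standardizes the action
    space across vehicles with heterogeneous resources, as required by a
    DRL policy that outputs fixed-dimensional actions.
    """
    encoding = []
    for shared, cap in zip(shared_units, capacities):
        if not 0 <= shared <= cap:
            raise ValueError("shared units must not exceed capacity")
        encoding.extend([1] * shared + [0] * (cap - shared))
    return encoding

# Example: a vehicle with 3 CPU units and 2 GPU units, sharing 2 CPUs and 1 GPU
vec = multi_hot_action([2, 1], [3, 2])
# vec -> [1, 1, 0, 1, 0]
```

Padding all vehicles' vectors to a common maximum capacity would then give every MADDPG agent an identically shaped action space.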
