Abstract

The high performance requirements of task processing and the low energy consumption requirements of network operation create a fundamental tension in the Internet of Vehicles (IoV). This paper addresses that tension and proposes an efficient edge-terminal collaboration scheme to support diverse task requirements in a green IoV network. Specifically, two types of tasks are first modeled in terms of data size and delay requirements, and an energy minimization problem is then formulated under these diverse task requirements. Thereafter, a multi-agent reinforcement learning framework is constructed, in which the time-varying IoV environment is modeled as a partially observable Markov decision process. Furthermore, a multi-agent soft actor-critic (MASAC) scheme is proposed to train the edge-terminal collaboration policy for task offloading, spectrum sharing, and vehicle power control. Finally, the effectiveness of the proposed scheme is validated through extensive simulation experiments against several baseline schemes.
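The abstract does not include implementation details, but the soft (entropy-regularized) policy at the heart of soft actor-critic can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the paper's method: it computes the SAC-style Boltzmann policy pi(a) proportional to exp(Q(a)/alpha) over a discrete action set, here imagined as a vehicle agent's offloading choices. The action names and Q-values are invented for illustration.

```python
import numpy as np

def soft_policy(q_values, alpha=0.2):
    """Entropy-regularized (soft) policy over discrete actions:
    pi(a) proportional to exp(Q(a) / alpha), the policy form that
    soft actor-critic regresses toward. A larger temperature alpha
    pushes the policy closer to uniform (more exploration)."""
    logits = np.asarray(q_values, dtype=float) / alpha
    logits -= logits.max()              # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Hypothetical example: one vehicle agent choosing among three actions
# (compute locally, offload to edge server A, offload to edge server B).
q = [1.0, 2.5, 0.5]                     # illustrative Q-value estimates
pi = soft_policy(q, alpha=0.5)          # valid probability distribution
```

In a multi-agent setting such as MASAC, each vehicle agent would maintain its own policy of this form over its local (partial) observation, while critics are trained on joint information; the entropy term encourages the exploration needed in a time-varying IoV environment.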
