Abstract
Vehicular edge computing (VEC) effectively reduces the computing load of vehicles by offloading computing tasks from vehicle terminals to edge servers. However, as the number of offloaded tasks grows, so do the transmission time and energy consumption of the network. To reduce the computing load of edge servers and improve system responsiveness, a shared offloading strategy based on deep reinforcement learning (DRL) is proposed for the complex computing environment of the Internet of Vehicles (IoV). The shared offloading strategy exploits the commonality of vehicle task requests: similar computing tasks from different vehicles can reuse the computing results of previously submitted tasks. The strategy adapts to complex IoV scenarios: each vehicle observes the offloading conditions of the VEC servers and then adaptively selects among three computing modes: local execution, task offloading, and shared offloading. In this article, the network state and the offloading strategy space serve as the input of the DRL model. Through DRL, each task unit selects the offloading strategy with optimal energy consumption in each time period of the dynamic IoV transmission and computing environment. Compared with existing proposals and DRL-based algorithms, the proposed strategy effectively reduces the delay and energy consumption required for task offloading.
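To illustrate the decision loop the abstract describes, the following minimal Python sketch trains a tabular Q-learning agent (standing in for the paper's unspecified DRL model) to pick one of the three computing modes per task so as to minimize a delay-plus-energy cost. The state discretization, cost function, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the per-task offloading decision loop.
    # The cost model and parameters below are assumed for illustration.
    import numpy as np

    ACTIONS = ("local", "offload", "shared")   # the three computing modes
    N_STATES = 10                              # coarse bins of channel quality / cache state
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1          # assumed learning hyperparameters

    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, len(ACTIONS)))

    def cost(state, action):
        """Assumed delay+energy cost: sharing a cached result is cheapest
        when a similar task was already computed (high 'state' here)."""
        if action == 0:   # local execution: fixed compute energy
            return 1.0
        if action == 1:   # task offloading: transmission cost grows with congestion
            return 0.4 + 0.08 * state
        return 0.2 if state > 5 else 1.5  # shared offloading: cheap only on a cache hit

    for episode in range(5000):
        s = int(rng.integers(N_STATES))                # observed network state
        a = int(rng.integers(len(ACTIONS))) if rng.random() < EPS else int(Q[s].argmax())
        r = -cost(s, a)                                # reward = negative energy/delay cost
        s_next = int(rng.integers(N_STATES))           # next network state (i.i.d. here)
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

    for s in range(N_STATES):
        print(f"state {s}: best mode = {ACTIONS[int(Q[s].argmax())]}")

Under these assumed costs, the learned policy offloads tasks when no shared result is available and switches to shared offloading once a similar task's result can be reused, mirroring the adaptive mode selection described above.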