Abstract

Due to the dynamic nature of a vehicular fog computing environment, efficient real-time resource allocation in an Internet of Vehicles (IoV) network, without degrading the quality of service of any onboard vehicle, is challenging. This article proposes a priority-sensitive task offloading and resource allocation scheme for an IoV network in which vehicles periodically exchange beacon messages to inquire about available services and other information needed for the offloading decisions. In the proposed methodology, vehicles are incentivized to share their idle computation resources with task vehicles, and a deep reinforcement learning algorithm based on soft actor–critic (SAC) is designed to classify tasks by priority and computation size so that transmit power can be allocated optimally. We further design deep deterministic policy gradient (DDPG) and twin delayed DDPG (TD3) algorithms for the same framework. All three algorithms learn an optimal task-offloading policy by maximizing the mean utility of the considered network. Extensive numerical results under different network conditions, together with a comparison of the three algorithms, validate the feasibility of distributed reinforcement learning for task offloading in future IoV networks.
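To make the SAC-based design concrete, the following is a minimal sketch of the kind of actor network such a scheme would use; it is an illustration, not the authors' published code. The state layout (task priority, normalized computation size, channel gain), the network sizes, and all identifiers are assumptions chosen for clarity; the action is a transmit-power fraction in (0, 1), matching the power-allocation role described in the abstract.

# Illustrative SAC-style actor for priority-aware offloading (assumed design,
# not from the paper). State: (priority, normalized task size, channel gain).
# Action: transmit-power fraction in (0, 1).
import torch
import torch.nn as nn

class OffloadingActor(nn.Module):
    """Squashed-Gaussian policy: maps a task state to a power fraction."""

    def __init__(self, state_dim: int = 3, action_dim: int = 1, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)       # mean of the Gaussian
        self.log_std = nn.Linear(hidden, action_dim)  # log std, clamped for stability

    def forward(self, state: torch.Tensor):
        h = self.body(state)
        mu = self.mu(h)
        log_std = self.log_std(h).clamp(-20, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        raw = dist.rsample()                 # reparameterized sample, as SAC requires
        action = torch.sigmoid(raw)          # squash to a power fraction in (0, 1)
        # Change-of-variables correction for the sigmoid squashing
        log_prob = dist.log_prob(raw) - torch.log(action * (1 - action) + 1e-8)
        return action, log_prob.sum(-1)

# Example: one high-priority, mid-sized task on a fairly good channel.
state = torch.tensor([[0.9, 0.4, 0.7]])
actor = OffloadingActor()
power_fraction, logp = actor(state)
print(power_fraction.item(), logp.item())

In a full agent, this actor would be trained against twin soft Q-critics with an entropy bonus (standard SAC); the DDPG and TD3 variants mentioned above would replace the stochastic policy with a deterministic one plus exploration noise.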
