Abstract

Due to the dynamic nature of vehicular fog computing environments, efficient real-time resource allocation in an Internet of Vehicles (IoV) network without degrading the quality of service of the on-board vehicles is challenging. This paper proposes a priority-sensitive task offloading and resource allocation scheme for an IoV network, in which vehicles periodically exchange beacon messages to inquire about available services and other information needed to make offloading decisions. In the proposed methodology, vehicles are incentivized to share their idle computation resources with task vehicles, and a deep reinforcement learning algorithm based on the soft actor-critic (SAC) method is designed to classify tasks according to their priority and computation size and to allocate power optimally. In particular, the SAC algorithm learns the optimal task offloading policy by maximizing the mean utility of the considered network. Extensive numerical results, together with comparisons against baseline algorithms, namely the greedy and deep deterministic policy gradient (DDPG) algorithms, are presented to validate the feasibility of the proposed algorithm.
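As a rough illustration of the kind of objective such a scheme optimizes, the sketch below computes a priority-weighted mean utility for a batch of offloaded tasks. This is a minimal sketch under assumed definitions, not the paper's actual formulation: the utility form, the latency model, and all names and values (mean_network_utility, cpu_cycles_per_bit, the example parameters) are illustrative assumptions. In this setting, the SAC policy would learn the power/CPU allocation action that maximizes this type of mean utility.

```python
# Hypothetical sketch (not the paper's exact formulation): a priority-weighted
# mean-utility objective for task offloading. All names and values are assumptions.
import numpy as np

def mean_network_utility(priorities, task_bits, cpu_cycles_per_bit,
                         helper_cpu_hz, tx_rate_bps, power_fractions):
    """Mean utility over tasks: higher-priority tasks weigh latency more heavily.

    power_fractions is the (normalized) resource-allocation action a policy such
    as SAC would output, one fraction of the helper's CPU/power budget per task.
    """
    power_fractions = np.asarray(power_fractions, dtype=float)
    power_fractions = power_fractions / power_fractions.sum()  # keep it a valid allocation

    tx_delay = task_bits / tx_rate_bps                          # time to upload each task
    comp_delay = task_bits * cpu_cycles_per_bit / (helper_cpu_hz * power_fractions)
    latency = tx_delay + comp_delay

    # Utility per task: priority-weighted negative latency (illustrative choice).
    utility = -np.asarray(priorities, dtype=float) * latency
    return utility.mean()

# Toy example: three offloaded tasks with different priorities and sizes.
priorities = [3, 1, 2]                  # higher = more delay-sensitive
task_bits = np.array([2e6, 8e6, 4e6])   # task sizes in bits
action = [0.5, 0.2, 0.3]                # policy's power/CPU allocation fractions
print(mean_network_utility(priorities, task_bits,
                           cpu_cycles_per_bit=500, helper_cpu_hz=2e9,
                           tx_rate_bps=10e6, power_fractions=action))
```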
