Abstract

In the Vehicular Internet of Things (VIoT), a vehicle with a shortage of computation resources can offload its tasks to other vehicles or to edge servers with surplus resources. However, the dynamic VIoT environment can create situations in which task offloading requirements cannot be guaranteed. Therefore, in this paper we propose a method named Vehicular Internet of Things Task Offloading (VIoT-TO) that divides the network into a cellular structure and, by applying a reinforcement learning approach, learns how and where to find a nearby idle task server in the network area. In our method, computation resources from both edge servers and peer vehicles are utilized to perform a task. The reward parameters of the learning algorithm are tuned to distribute the load fairly among the different task servers, and their setting leads to a preference for nearby task servers over distant ones. As a result, the task offloading delay remains manageable and tasks can be completed in an appropriate time. The reinforcement learning problem is solved with a Q-learning algorithm, for which the parameters of the underlying Markov decision process are determined. Finally, we evaluate our method and compare it against rival methods. The evaluation reveals the superiority of the proposed method in terms of task offloading delay, load balancing index, and task completion ratio.
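To illustrate the kind of Q-learning formulation the abstract describes, the sketch below trains a toy agent to pick a task server whose reward penalizes both distance and current load, so a nearby, lightly loaded server wins. All concrete values here (server count, distances, loads, reward weights, and hyperparameters) are illustrative assumptions, not figures from the paper.

```python
import random

random.seed(0)

# Assumed toy setup: one requesting vehicle choosing among candidate
# task servers (edge servers and peer vehicles). Numbers are illustrative.
N_SERVERS = 4
distance = [1, 3, 2, 5]        # e.g. cells/hops to each candidate server
load = [0.2, 0.1, 0.8, 0.3]    # current utilization of each server

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
W_DIST, W_LOAD = 1.0, 2.0           # assumed reward weights

# Single-state formulation: the requesting cell is the state,
# and each action is the choice of one task server.
Q = [0.0] * N_SERVERS

def reward(a: int) -> float:
    # Higher (less negative) reward for near and lightly loaded servers,
    # mirroring the paper's tuning goals of low delay and load balancing.
    return -(W_DIST * distance[a] + W_LOAD * load[a])

for _ in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        a = random.randrange(N_SERVERS)
    else:
        a = max(range(N_SERVERS), key=Q.__getitem__)
    r = reward(a)
    # Q-learning update; with a single state, the next-state max is max(Q).
    Q[a] += ALPHA * (r + GAMMA * max(Q) - Q[a])

best = max(range(N_SERVERS), key=Q.__getitem__)
print("chosen server:", best)  # → server 0: nearest and moderately loaded
```

In this toy reward, server 0 (distance 1, load 0.2) dominates server 1 (distance 3, load 0.1) because the distance penalty outweighs the load penalty; adjusting `W_DIST` and `W_LOAD` shifts that trade-off, which is the role the abstract attributes to reward tuning.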
