Abstract

Research on vehicular ad-hoc networks (VANETs) has been accelerated by 5G technology. Software-defined networking and fog nodes placed near the vehicles have improved throughput and latency in the processing of requests. However, fog nodes have limited computational resources, such as processing capacity and memory, and these must be managed optimally. Estimating a vehicle's future location can support the optimal offloading of its processing requests. This paper introduces a Kalman filter prediction scheme to estimate a vehicle's next location, so that the future availability of fog resources can inform the offloading decision. Deep Q-Network-based reinforcement learning is used to select a resource-rich fog node in the VANET. A Long Short-Term Memory (LSTM)-based Deep Q-Network offloads the tasks of the fog nodes optimally according to their available resources, yielding much better performance. The proposed Deep Q-Network algorithm is an efficient solution for optimal request offloading and improves the overall performance of the network. The average reward of the proposed Deep Q-Network is found to be 56.889% higher than SARSA learning and 44.727% higher than Q-learning.
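The location-prediction step described above can be illustrated with a minimal constant-velocity Kalman filter. This is a generic sketch, not the paper's implementation: the state model, noise covariances, and sample measurements below are all assumptions chosen for illustration. The idea is that a one-step-ahead position estimate lets the offloading controller check fog-node availability at the vehicle's predicted location in advance.

```python
import numpy as np

# Hypothetical constant-velocity model: state x = [position, velocity],
# time step dt = 1 s, and we observe only the (noisy) position.
F = np.array([[1.0, 1.0],    # state transition: pos += vel * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # observation matrix: measure position only
Q = np.eye(2) * 0.01         # process noise covariance (assumed)
R = np.array([[0.25]])       # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]]) # initial state estimate
P = np.eye(2)                # initial estimate covariance

def kalman_step(x, P, z):
    """One predict-update cycle for a new position measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                   # innovation (measurement residual)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Noisy position readings from a vehicle moving at roughly 1 unit/s
for z in [1.0, 2.1, 2.9, 4.0, 5.1]:
    x, P = kalman_step(x, P, np.array([[z]]))

# One-step-ahead prediction of the vehicle's next position
next_pos = (F @ x)[0, 0]
print(f"predicted next position: {next_pos:.2f}")
```

In the offloading scenario, the predicted position would be matched against the coverage areas of candidate fog nodes before the DQN agent selects where to place the task.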
