Abstract

The erratic environment of a vehicular ad-hoc network (VANET), especially in challenged areas such as military zones or battlefields, can cause data loss while forwarding important multimedia files. Because vehicular delay-tolerant networks (VDTNs) lack permanent grid power connections, reinforcement learning techniques can be applied at the base stations/roadside units (RSUs) to optimize the power usage of the nodes. Each RSU learns to pick a vehicle based on information gathered about traffic characteristics, the budget for infrastructure resources, and the total contact time. From this cumulative knowledge a compensation metric is evaluated, which acts as a performance metric for deciding which vehicular node to select for message forwarding. Deep reinforcement learning augments the decisions made from the observations gathered by the RSUs, and this conserves the power of the nodes. A novel energy-efficient technique that deploys secure, intelligent mechanisms in the vehicular nodes is proposed, which significantly improves the processing of files in challenged networks. For simulation purposes, a real-time data set is taken from the CRAWDAD website. Experimental results show that, after applying the reinforcement technique, message delivery increases by 28%, latency decreases by 16%, and energy consumption is reduced by 21%.
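The RSU-side selection loop described above can be sketched as a small reinforcement-learning agent. The sketch below is a minimal illustration, not the paper's method: the compensation metric's weights and the per-vehicle-class feature profiles are invented for demonstration, and the paper's deep reinforcement learning is approximated here with tabular Q-learning over a small set of candidate vehicle classes.

```python
import random

# Hypothetical compensation metric. The weights (w_time, w_cost, w_traffic)
# and the exact feature set are illustrative assumptions, not from the paper.
def compensation(contact_time, resource_cost, traffic_density,
                 w_time=1.0, w_cost=0.5, w_traffic=0.3):
    """Reward an RSU assigns after forwarding via a candidate vehicle:
    longer contact time and denser traffic help; resource cost penalizes."""
    return w_time * contact_time - w_cost * resource_cost + w_traffic * traffic_density

class RSUAgent:
    """Tabular Q-learning over candidate vehicle classes (a lookup-table
    stand-in for the deep RL the abstract refers to)."""
    def __init__(self, n_vehicle_classes, alpha=0.1, epsilon=0.1):
        # Optimistic initialization so every class gets tried early.
        self.q = [10.0] * n_vehicle_classes
        self.alpha, self.epsilon = alpha, epsilon

    def select(self):
        if random.random() < self.epsilon:            # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, action, reward):
        # One-step update; single-state bandit form, so no next-state term.
        self.q[action] += self.alpha * (reward - self.q[action])

# Usage: the RSU repeatedly picks a vehicle class and learns from the
# compensation metric observed after each forwarding attempt. The
# (contact_time, resource_cost, traffic_density) profiles are simulated.
random.seed(0)
agent = RSUAgent(n_vehicle_classes=3)
profiles = [(2.0, 1.0, 0.2), (5.0, 2.0, 0.5), (1.0, 0.5, 0.9)]
for _ in range(500):
    a = agent.select()
    ct, rc, td = profiles[a]
    agent.update(a, compensation(ct, rc, td))
best = max(range(3), key=agent.q.__getitem__)
```

After training, `best` names the vehicle class whose learned value is highest, i.e. the node the RSU would select for message forwarding; avoiding repeated forwarding attempts through poor candidates is what saves node energy in this picture.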
