Abstract

Vehicular ad-hoc networks (VANETs) are drawing increasing attention in intelligent transportation systems as a means to reduce road accidents and assist safe driving. However, due to the high mobility and uneven distribution of vehicles in VANETs, multi-hop communication between vehicles remains particularly challenging. Considering the distinctive characteristics of VANETs, this paper proposes an adaptive routing protocol based on reinforcement learning (ARPRL). Through a distributed Q-learning algorithm, ARPRL proactively learns fresh network link status from periodic HELLO packets in the form of Q-table updates, improving its adaptability to network changes. Novel Q-value update functions that take vehicle mobility information into account are designed to reinforce the Q-values of wireless links through the exchange of HELLO packets between neighboring vehicles. To avoid routing loops that can arise during the Q-learning process, the HELLO packet structure is redesigned. In addition, a reactive route probe strategy is applied during learning to speed up the convergence of Q-learning. Finally, feedback from the MAC layer is used to further improve the adaptation of Q-learning to the VANET environment. Simulation results show that ARPRL outperforms existing protocols in terms of average packet delivery ratio, end-to-end delay, and route path hop count, while network overhead remains within an acceptable range.
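The abstract describes reinforcing link Q-values from neighbor HELLO packets. As a rough illustration only, the sketch below shows a generic distributed Q-routing style update in which a node revises its Q-value for a (destination, next-hop) pair from the neighbor's advertised best Q-value, scaled by a mobility-aware link-quality factor. All names, parameters, and the exact update form here are assumptions for illustration, not the paper's actual formulas.

```python
def update_q(q_table, neighbor, dest, neighbor_best_q, link_quality,
             alpha=0.5, gamma=0.9):
    """Illustrative Q-value update on receipt of a HELLO packet.

    q_table         -- dict mapping (dest, next_hop) -> Q value
    neighbor        -- the neighbor that sent the HELLO (candidate next hop)
    dest            -- destination the neighbor advertised a Q value for
    neighbor_best_q -- neighbor's best Q value toward dest (from the HELLO)
    link_quality    -- mobility-aware factor in [0, 1] (hypothetical term)
    alpha, gamma    -- learning rate and discount factor
    """
    old = q_table.get((dest, neighbor), 0.0)
    # Discounted neighbor estimate, weighted by link quality, blended
    # into the old value with learning rate alpha.
    target = link_quality * gamma * neighbor_best_q
    q_table[(dest, neighbor)] = (1 - alpha) * old + alpha * target
    return q_table[(dest, neighbor)]


# Example: node A hears a HELLO from neighbor B advertising its best
# Q value toward destination D; forwarding then picks the neighbor
# with the highest Q value for D.
q = {}
update_q(q, neighbor="B", dest="D", neighbor_best_q=1.0, link_quality=0.8)
```

Repeated HELLO exchanges drive the Q values toward a fixed point, which is why the paper's reactive probe strategy (injecting extra updates on demand) can speed up convergence.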

