Abstract

Owing to high mobility and a dynamic network topology, Intrusion Detection Systems (IDSs) in Vehicular Ad-hoc Networks (VANETs) face many challenges, especially in balancing detection accuracy against detection efficiency. Existing research on deploying IDSs in VANETs focuses mainly on this accuracy-efficiency tradeoff, but little work has addressed adapting the tradeoff as the network changes. We therefore address two crucial problems: 1) how can an IDS perceive changes in its environment? 2) how can the IDS adapt to different scenarios? In this paper, a Bayesian game theory and Deep Q-learning Network-based IDS, called GaDQN-IDS, is proposed for VANETs. The interactions between the IDS and attackers are formulated as a dynamic intrusion detection game, in which the IDS decides either to merely adjust the accuracy-efficiency tradeoff or to be retrained completely when its detection capacity has declined. The Nash Equilibria (NE) of the game are derived to reveal how the optimal decision of the IDS depends on detection performance and road conditions. Moreover, a Deep Q-learning Network (DQN)-Adjustment is proposed to realize the self-adaptation of the IDS in the dynamic game, while an Error Priority Learning (EPL) scheme is further designed for retraining the IDS in changing VANETs. Simulation results show that GaDQN-IDS outperforms existing IDSs, achieving a higher detection rate as well as lower detection time and overhead.
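
The abstract's core decision, choosing between adjusting the accuracy-efficiency tradeoff and full retraining once detection capacity declines, can be illustrated with a toy Q-learning agent. This is a minimal sketch, not the paper's GaDQN-IDS: the states, actions, rewards, and all parameters below are invented for illustration, and a simple tabular update stands in for the deep Q-network.

```python
import random

# Toy illustration (hypothetical, not the paper's method): an agent observes a
# coarse "detection performance" state and chooses between the two actions the
# abstract describes -- "adjust" the tradeoff, or "retrain" the detector.
STATES = ["performance_ok", "performance_declined"]
ACTIONS = ["adjust", "retrain"]

def reward(state, action):
    # Hypothetical reward shape: retraining is costly but pays off once
    # detection capacity has declined; a cheap adjustment suffices otherwise.
    if state == "performance_ok":
        return 1.0 if action == "adjust" else -0.5
    return 1.0 if action == "retrain" else -0.5

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # Epsilon-greedy action selection over the tabular Q-values.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        # One-step (bandit-style) Q update: no next-state bootstrap term.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under these invented rewards the learned greedy policy adjusts while performance holds and retrains once it declines, mirroring the decision rule the game's equilibrium analysis is said to characterize.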
