Abstract

With the proliferation of Internet of Things (IoT)-connected devices, the scope of cyber-attacks on the IoT has grown rapidly, making an intrusion detection system that is efficient, accurate, fast, dynamic, and scalable a necessity in IoT environments. Fog computing is a decentralized platform that extends Cloud computing to address its inherent limitations, and maintaining a high level of security is critical to ensure secure and reliable communication between Fog nodes and IoT devices. To address this issue, we present an intrusion detection method based on artificial neural networks and genetic algorithms that efficiently detects various types of network intrusions on local Fog nodes. In this approach, a genetic algorithm optimizes the interconnection weights of the network and the biases associated with each neuron, so that a back-propagation neural network model can be established quickly and effectively. Moreover, the distributed architecture of Fog computing allows the intrusion detection system to be distributed over local Fog nodes with a centralized Cloud, achieving faster attack detection than a Cloud-based intrusion detection mechanism. A set of experiments was conducted on a Raspberry Pi 4 acting as a Fog node, using the UNSW-NB15 and ToN_IoT data sets for binary classification; the optimized weights and biases achieved better performance than a neural network without optimization. The optimized model showed interoperability, flexibility, and scalability. Furthermore, a higher intrusion detection rate is attainable by decreasing the neural network error rate and increasing the true positive rate. According to the experiments, the proposed approach produces better outcomes in terms of detection accuracy and processing time.
Specifically, the proposed approach achieved a 16.35% and a 37.07% reduction in execution time on the two data sets, respectively, compared to other state-of-the-art methods, which accelerated the convergence process and saved processing power.
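The core idea described above, using a genetic algorithm to evolve a neural network's weights and biases instead of (or before) gradient-based training, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the network shape, the GA operators (elitism, uniform crossover, Gaussian mutation), and all hyperparameters are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification data, standing in for
# preprocessed UNSW-NB15 / ToN_IoT feature vectors.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

N_IN, N_HID = 8, 6
GENOME = N_IN * N_HID + N_HID + N_HID + 1  # all weights and biases, flattened

def decode(g):
    """Unpack a flat genome into layer weights and biases."""
    i = 0
    W1 = g[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = g[i:i + N_HID]; i += N_HID
    W2 = g[i:i + N_HID].reshape(N_HID, 1); i += N_HID
    b2 = g[i]
    return W1, b1, W2, b2

def predict(g, X):
    W1, b1, W2, b2 = decode(g)
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))  # sigmoid output

def fitness(g):
    # Negative binary cross-entropy: higher is better.
    p = predict(g, X)
    return -np.mean(-y * np.log(p + 1e-9) - (1 - y) * np.log(1 - p + 1e-9))

POP, GENS = 40, 60
pop = rng.normal(scale=0.5, size=(POP, GENOME))
for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[::-1][:POP // 2]]   # keep the fitter half
    pairs = elite[rng.integers(0, len(elite), size=(POP - len(elite), 2))]
    mask = rng.random((POP - len(elite), GENOME)) < 0.5
    children = np.where(mask, pairs[:, 0], pairs[:, 1])  # uniform crossover
    children += rng.normal(scale=0.1, size=children.shape)  # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(g) for g in pop])]
acc = np.mean((predict(best, X) > 0.5) == y)
```

In the paper's setting, the evolved `best` genome would then seed back-propagation fine-tuning on the Fog node, so the GA only has to get the weights into a good basin rather than fully converge.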
