Abstract
The Internet of Things (IoT) has gained widespread importance in recent years. However, security and privacy issues persist in IoT networks, and because devices are limited in computational power and storage, standard protection approaches cannot be deployed. In this article, we propose a lightweight distributed intrusion detection system (IDS) framework, called FCAFE-BNET (Fog based Context Aware Feature Extraction using BranchyNET). The proposed FCAFE-BNET approach considers varying network conditions, such as bandwidth and data load, when allocating inference tasks to cloud and edge resources, and adapts as those conditions change. This is advantageous for applications with strict quality-of-service requirements, such as video streaming or real-time communication, where steady and reliable performance is needed. Early-exit deep neural networks (DNNs) are employed for faster inference at the edge: the representations learned in the initial layers are often sufficient for the required classification task, so instead of propagating every sample through all subsequent layers, an early-exit mechanism lets a large fraction of test samples leave the network at intermediate branches once a confidence threshold is crossed, while the confidence value associated with each inference is retained. This approach achieves faster inference with high accuracy. Comparable studies rely on manual feature extraction techniques, which can overlook valuable patterns and thus degrade classification performance. The proposed framework instead converts textual/tabular data into 2-D images, allowing the DNN model to learn its own features autonomously. This conversion scheme facilitated the identification of various intrusion types, ranging from 5 to 14 different categories. FCAFE-BNET works for both network-based (NIDS) and host-based (HIDS) intrusion detection. Our experiments demonstrate that, in comparison with recent approaches, FCAFE-BNET achieves a 39.12%–50.23% reduction in total inference time on benchmark real-world datasets such as NSL-KDD, UNSW-NB 15, ToN_IoT, and ADFA_LD.
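To illustrate the early-exit idea described in the abstract, the following PyTorch sketch attaches a side branch to an intermediate layer; at inference time a sample exits at the branch whenever its softmax confidence crosses a threshold, and only otherwise continues through the deeper layers. The layer sizes, the single branch, and the 0.9 threshold are illustrative assumptions, not the exact FCAFE-BNET architecture.

```python
# Illustrative BranchyNet-style early-exit classifier (hypothetical sizes/threshold,
# not the exact FCAFE-BNET architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, in_dim=196, n_classes=5, exit_threshold=0.9):
        super().__init__()
        self.exit_threshold = exit_threshold
        # Shallow trunk whose features may already suffice for easy samples.
        self.trunk1 = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        # Side branch: an early-exit classifier attached after the shallow trunk.
        self.branch1 = nn.Linear(128, n_classes)
        # Deeper trunk used only when the early exit is not confident enough.
        self.trunk2 = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.final = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.trunk1(x)
        early_logits = self.branch1(h)
        if not self.training:
            conf, _ = F.softmax(early_logits, dim=-1).max(dim=-1)
            # Exit early when every sample in the batch crosses the threshold.
            if bool((conf >= self.exit_threshold).all()):
                return early_logits, "early_exit"
        final_logits = self.final(self.trunk2(h))
        # During training, both heads are returned so a joint loss can be applied.
        return (early_logits, final_logits) if self.training else (final_logits, "final_exit")

# Example: route a single test sample through the network.
model = EarlyExitNet().eval()
with torch.no_grad():
    logits, exit_point = model(torch.randn(1, 196))
    print(exit_point, logits.argmax(dim=-1).item())
```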
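The tabular-to-image conversion mentioned in the abstract can be sketched in a similar spirit: each feature vector is min-max scaled to pixel intensities and padded into a square grayscale image that a CNN can consume. The 14x14 target size and the min-max scaling are illustrative assumptions and may differ from the paper's exact encoding scheme.

```python
# Illustrative tabular-to-image encoding (assumed min-max scaling and zero padding;
# the paper's exact conversion scheme may differ).
import numpy as np

def tabular_to_image(row, feature_min, feature_max, side=14):
    """Map one feature vector to a side x side grayscale image in [0, 255]."""
    scale = np.where(feature_max > feature_min, feature_max - feature_min, 1.0)
    pixels = (row - feature_min) / scale           # min-max scale each feature to [0, 1]
    pixels = np.clip(pixels, 0.0, 1.0) * 255.0     # convert to 8-bit pixel intensities
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: pixels.size] = pixels[: side * side]  # pad (or truncate) to the image size
    return padded.reshape(side, side).astype(np.uint8)

# Example: encode a random 41-feature record (e.g., NSL-KDD-sized) as a 14x14 image.
rng = np.random.default_rng(0)
data = rng.random((100, 41))
img = tabular_to_image(data[0], data.min(axis=0), data.max(axis=0))
print(img.shape, img.dtype)  # (14, 14) uint8
```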