Performance Analysis of a CNN-Fuzzy Logic Based Real-time Intrusion Detection for Industrial IoT Systems
The Industrial Internet of Things (IIoT) has enhanced automation, real-time monitoring, and predictive decision-making in modern industries. However, the growing connectivity of industrial IoT systems has exposed them to severe cyber threats such as ransomware, man-in-the-middle (MitM), and distributed denial-of-service (DDoS) attacks, which can disrupt critical operations and compromise safety. Conventional Intrusion Detection Systems (IDS) often face limitations in achieving high accuracy, rapid detection, and low latency while minimizing false alarms. This study, which adopts a mixed-methods (qualitative and quantitative) research design, proposes a CNN-Fuzzy Logic hybrid model for real-time intrusion detection and prevention in industrial IoT environments. Convolutional Neural Networks (CNN) are employed to extract deep hierarchical features from industrial IoT traffic, while fuzzy logic is integrated to enhance decision-making under uncertainty and reduce false positives. The model was trained and evaluated on Kaggle cybersecurity datasets containing ransomware, MitM, and DDoS attacks. Performance evaluation demonstrates that the CNN-Fuzzy IDS achieves an accuracy of 92.5%, a detection rate of approximately 93%, a false positive rate (FPR) of 2.51%, and a reduced average latency of 1.207 µs (7.14% of total latency), which is acceptable for most industrial IoT applications. These results highlight the effectiveness of hybrid intelligent systems in enhancing the resilience and reliability of industrial IoT cybersecurity, and the proposed model provides a promising pathway for deploying scalable, adaptive, real-time IDS solutions in critical industrial infrastructures. Regarding computational overhead, a minimal practical setup requires a modern multi-core CPU, 8–16 GB of RAM, an SSD, and a stable operating system (Windows 10 only on modern hardware); alternatively, a lightweight Linux distribution can run on the edge device with heavy tasks offloaded elsewhere.
Future research should focus on optimizing hybrid ML architectures for deployment on resource-constrained industrial IoT devices, integrating the approach into broader threat-detection pipelines, and expanding evaluation to real-world industrial environments.
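As a rough illustration of how a fuzzy stage can temper CNN outputs, the sketch below applies Mamdani-style rules to a CNN confidence score. This is not the paper's implementation: the membership functions, rule base, and the traffic-rate input are illustrative assumptions, with the CNN itself abstracted into a single score.

```python
# Hedged sketch: a fuzzy decision layer over a CNN anomaly score.
# `cnn_score` stands in for the network's confidence that a flow is malicious.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_verdict(cnn_score, traffic_rate):
    """Combine CNN confidence with a normalized traffic rate (0..1)
    to damp false positives on low-rate flows."""
    low_conf  = tri(cnn_score, -0.5, 0.0, 0.5)
    high_conf = tri(cnn_score,  0.5, 1.0, 1.5)
    low_rate  = tri(traffic_rate, -0.5, 0.0, 0.6)
    high_rate = tri(traffic_rate,  0.4, 1.0, 1.5)

    # Mamdani-style rules (min for AND, max to aggregate):
    # R1: high confidence AND high rate -> attack
    # R2: high confidence AND low rate  -> attack (down-weighted)
    # R3: low confidence                -> benign
    attack = max(min(high_conf, high_rate), 0.5 * min(high_conf, low_rate))
    benign = low_conf
    return "attack" if attack > benign else "benign"
```

In a real deployment the membership breakpoints would be tuned on validation traffic; that tuning is what would drive any false-positive reduction.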
- Research Article
- 10.1109/jiot.2020.3044057
- Dec 15, 2020
- IEEE Internet of Things Journal
The Industrial Internet of Things (IIoT) is a critically important implementation of the Internet of Things (IoT), connecting IoT devices ubiquitously in an industrial environment. Based on the interconnection of IoT devices, IIoT applications can collect and analyze sensing data, which help operators to control and manage manufacturing systems, leading to significant performance improvements and enabling automation. IIoT systems are characterized by a variety of IIoT applications, which generate different computing tasks depending on their functionalities. Some tasks are time sensitive (TS), while others are not, and more importantly, some tasks are nonpreemptive in IIoT scenarios. Thus, processing the different IIoT applications efficiently in an IIoT environment is key to achieving automation. Since computing resources are limited in IIoT, how to rapidly process TS tasks is a critical issue. Although some existing scheduling schemes can deal with the latency requirements of TS tasks, they lack consideration for nonpreemptive tasks. To address this issue, in this article we consider a typical smart warehouse system as an example and propose a generic task scheduling scheme that reserves computing resources to wait for upcoming TS tasks in such an IIoT environment. In doing so, our proposed scheme is capable of minimizing the overall waiting time for TS tasks. To evaluate the proposed scheme, we have implemented a simulation platform for a smart warehouse and conducted extensive experiments. Our experimental results demonstrate the efficacy of our scheme, which can allocate computing resources so that the processing time for the TS tasks can be reduced. Additionally, we discuss some potential research directions toward improving performance in IIoT environments with respect to resource management, machine learning, and security and privacy.
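The reservation idea above can be sketched as an admission-control policy: best-effort tasks may occupy only the unreserved compute slots, so a nonpreemptive best-effort task can never block an arriving time-sensitive (TS) task. The slot model and class names are illustrative, not the paper's scheme.

```python
# Hedged sketch: reserve R of N compute slots for TS tasks. Because tasks
# are nonpreemptive, admission is the only control point.

class ReservingScheduler:
    def __init__(self, slots, reserved_for_ts):
        self.slots = slots
        self.reserved = reserved_for_ts
        self.busy = 0  # slots currently occupied by running tasks

    def admit(self, is_ts):
        """Return True if the task can start now."""
        # Best-effort tasks see only the unreserved capacity.
        limit = self.slots if is_ts else self.slots - self.reserved
        if self.busy < limit:
            self.busy += 1
            return True
        return False

    def finish(self):
        self.busy -= 1

sched = ReservingScheduler(slots=4, reserved_for_ts=1)
print([sched.admit(False) for _ in range(4)])  # -> [True, True, True, False]
print(sched.admit(True))                       # -> True (reserved slot)
```

The fourth best-effort task is rejected even though a slot is free, keeping that slot available for the TS task that arrives next.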
- Research Article
- 10.3390/fi17070279
- Jun 24, 2025
- Future Internet
The rapid expansion of Edge and Industrial Internet of Things (IIoT) systems has intensified the risk and complexity of cyberattacks. Detecting advanced intrusions in these heterogeneous and high-dimensional environments remains challenging. As the IIoT becomes integral to critical infrastructure, ensuring security is crucial to prevent disruptions and data breaches. Traditional IDS approaches often fall short against evolving threats, highlighting the need for intelligent and adaptive solutions. While deep learning (DL) offers strong capabilities for pattern recognition, single-model architectures often lack robustness. Thus, hybrid and optimized DL models are increasingly necessary to improve detection performance and address data imbalance and noise. In this study, we propose an optimized hybrid DL framework that combines a transformer, generative adversarial network (GAN), and autoencoder (AE) components, referred to as Transformer–GAN–AE, for robust intrusion detection in Edge and IIoT environments. To enhance the training and convergence of the GAN component, we integrate an improved chimp optimization algorithm (IChOA) for hyperparameter tuning and feature refinement. The proposed method is evaluated using three recent and comprehensive benchmark datasets, WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT, widely recognized as standard testbeds for IIoT intrusion detection research. Extensive experiments are conducted to assess the model’s performance compared to several state-of-the-art techniques, including standard GAN, convolutional neural network (CNN), deep belief network (DBN), time-series transformer (TST), bidirectional encoder representations from transformers (BERT), and extreme gradient boosting (XGBoost). Evaluation metrics include accuracy, recall, AUC, and run time. Results demonstrate that the proposed Transformer–GAN–AE framework outperforms all baseline methods, achieving a best accuracy of 98.92%, along with superior recall and AUC values. 
The integration of IChOA enhances GAN stability and accelerates training by optimizing hyperparameters. Together with the transformer for temporal feature extraction and the AE for denoising, the hybrid architecture effectively addresses complex, imbalanced intrusion data. The proposed optimized Transformer–GAN–AE model demonstrates high accuracy and robustness, offering a scalable solution for real-world Edge and IIoT intrusion detection.
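The autoencoder's contribution can be isolated in a small sketch: score each sample by reconstruction error under a model fitted only to normal traffic, and flag outliers. A linear autoencoder (equivalent to a PCA subspace) stands in for the paper's trained nonlinear AE; the synthetic data and threshold choice are assumptions.

```python
import numpy as np

# Hedged sketch of AE-based anomaly scoring: the optimal *linear*
# autoencoder is the PCA subspace, so SVD gives us its weights directly.

rng = np.random.default_rng(0)
normal = (rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))) * 0.1
normal[:, 0] += 5.0  # normal traffic clusters around a fixed profile

mu = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mu, full_matrices=False)
W = vt[:2]  # encoder rows = top-2 principal directions

def recon_error(x):
    z = (x - mu) @ W.T          # encode to 2 dimensions
    x_hat = z @ W + mu          # decode back to 8 dimensions
    return float(np.sum((x - x_hat) ** 2))

# Threshold at the 99th percentile of errors on normal traffic.
threshold = np.percentile([recon_error(x) for x in normal], 99)
attack = mu + 4.0               # a sample far off the normal manifold
print(recon_error(attack) > threshold)  # far-off sample exceeds threshold
```

The same scoring pattern applies with a nonlinear AE; only the encode/decode functions change.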
- Research Article
- 10.11113/ijic.v15n1.544
- May 27, 2025
- International Journal of Innovative Computing
The rise of Industry 4.0 has led to the widespread adoption of Industrial Internet of Things (IIoT) devices, enhancing manufacturing efficiency while introducing significant cybersecurity risks. IIoT environments are highly susceptible to cyber threats such as Denial-of-Service (DoS), SQL injection, and ransomware, which can lead to production downtime and data breaches. Traditional intrusion detection systems (IDS) often fail to detect evolving threats, resulting in high false negative rates. This research proposes an advanced IDS integrating Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) to enhance IIoT security. By leveraging both spatial and temporal feature extraction, the proposed model effectively identifies network anomalies in real-time industrial environments. This study contributes to IIoT cybersecurity by developing an IDS capable of improving threat detection through the integration of CNN and LSTM architectures. The approach enhances pattern recognition and sequential dependency modeling, making it more adaptive to dynamic cyber threats. The model is trained and evaluated on a large-scale IIoT dataset, achieving a binary classification accuracy of 71%, outperforming several state-of-the-art models. The CNN-LSTM IDS demonstrates a strong ability to recognize normal traffic, with a recall of 99%, significantly reducing false alarms. In multi-class classification, the model successfully identifies certain high-volume attack types, such as DDoS. These findings underscore both the strengths and limitations of deep learning-based intrusion detection in IIoT environments. While the proposed model offers significant improvements, further research is needed to address the detection of low-frequency attacks and optimize classification performance.
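The CNN-to-LSTM hand-off described above can be sketched shape-wise in plain NumPy. The weights are fixed and illustrative (untrained), so this shows the data flow rather than the paper's model: a 1-D convolution extracts local "spatial" features from a packet-feature sequence, then a single LSTM cell summarizes the convolved sequence over time.

```python
import numpy as np

rng = np.random.default_rng(1)
seq = rng.normal(size=(20, 4))        # 20 time steps, 4 features each

# --- CNN stage: one conv filter of width 3, ReLU activation ---
kernel = rng.normal(size=(3, 4))
conv = np.array([np.maximum(np.sum(seq[t:t+3] * kernel), 0.0)
                 for t in range(len(seq) - 2)])  # shape (18,)

# --- LSTM stage: one scalar LSTM cell over the conv features ---
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h = c = 0.0
wf, wi, wo, wg = 0.5, 0.5, 0.5, 0.5   # illustrative fixed weights
for x in conv:
    f = sigmoid(wf * x); i = sigmoid(wi * x)
    o = sigmoid(wo * x); g = np.tanh(wg * x)
    c = f * c + i * g                 # cell state carries history
    h = o * np.tanh(c)                # hidden state = sequence summary

print(-1.0 < h < 1.0)                 # final hidden state is bounded
```

In the actual model, `h` would feed a dense softmax layer for binary or multi-class classification.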
- Research Article
- 10.3390/sym15101958
- Oct 23, 2023
- Symmetry
This article addresses the issue of information security in the Industrial Internet of Things (IIoT) environment. Information security risk assessment in the IIoT is complicated by several factors: the complexity and heterogeneity of the system, the dynamic nature of the system, the distributed network infrastructure, the lack of standards and guidelines, and the increased consequences of security breaches. Given these factors, information security risk assessment in the IIoT requires a comprehensive approach adapted to the peculiarities and requirements of a particular system and industry. It is necessary to use specialized risk assessment methods and to take into account the context and peculiarities of the system. The method of information security risk assessment in the IIoT, based on the mathematical apparatus of fuzzy set theory, is proposed. This paper analyzes information security threats for IIoT systems, from which the most significant criteria are selected. The rules, based on which decisions are made, are formulated in the form of logical formulas containing input parameters. Three fuzzy inference systems are used: one to estimate the probability of threat realization, another to estimate the probable damage, and a final one to estimate the information security risk for the IIoT system. Based on the proposed method, examples of calculating the information security risk assessment in the IIoT environment are provided. The proposed scientific approach can serve as a foundation for creating expert decision support systems for designing IIoT systems.
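The last of the three fuzzy inference systems can be sketched as below, taking the upstream probability and damage estimates as given inputs. The membership functions, rule base, and defuzzification levels are illustrative assumptions, not the paper's.

```python
# Hedged sketch of a Mamdani-style risk FIS: map threat probability and
# probable damage (both normalized to 0..1) to a risk level in 0..1.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk(prob, damage):
    lo_p, hi_p = tri(prob, -0.5, 0.0, 0.7), tri(prob, 0.3, 1.0, 1.5)
    lo_d, hi_d = tri(damage, -0.5, 0.0, 0.7), tri(damage, 0.3, 1.0, 1.5)
    # Rules: high prob AND high damage -> high risk; mixed -> medium;
    # low prob AND low damage -> low risk.
    hi_r  = min(hi_p, hi_d)
    med_r = max(min(hi_p, lo_d), min(lo_p, hi_d))
    lo_r  = min(lo_p, lo_d)
    # Weighted-centroid defuzzification over levels 0.1 / 0.5 / 0.9.
    num = 0.1 * lo_r + 0.5 * med_r + 0.9 * hi_r
    den = lo_r + med_r + hi_r
    return num / den if den else 0.0

print(risk(0.9, 0.8))  # high probability, high damage -> high risk
```

The two upstream systems (threat probability, probable damage) would have the same structure with their own input criteria and rule bases.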
- Research Article
- 10.1109/tii.2020.2983387
- Apr 3, 2020
- IEEE Transactions on Industrial Informatics
Trustworthiness is the probability that a system will function according to its intended behavior under a given set of circumstances, as demonstrated by qualities including, but not limited to, safety, security, privacy, reliability, and real-time performance. Trustworthiness in industrial Internet of Things (IIoT) systems and applications is a vital expectation of industrial investors. Preserving the trustworthiness of such systems and networks is crucial to avoiding cost overruns, delays, and loss of life. A trustworthy IIoT system considers both the security characteristics and the system functionalities (faults, failures) of IoT trustworthiness. Traditional security systems, tools, techniques, and applications are not sufficient to protect the IIoT platform, owing to the practical realities of industrial environments: diverse and mismatched protocols, constrained upgrade opportunities, and resource constraints in the industrial system. Regarding these concerns, this special section aims to bring together up-to-date research results on IIoT systems and applications with trustworthiness support, leading to steady operations, high-quality results, and improved safety in the IIoT.
- Research Article
- 10.1016/j.jnca.2023.103809
- Dec 4, 2023
- Journal of Network and Computer Applications
Securing the Industrial Internet of Things against ransomware attacks: A comprehensive analysis of the emerging threat landscape and detection mechanisms
- Research Article
- 10.1109/mcom.2018.1700622
- Feb 1, 2018
- IEEE Communications Magazine
The emergence of the Industrial Internet of Things (IIoT) has paved the way to real-time big data storage, access, and processing in the cloud environment. In IIoT, the big data generated by various devices such as smartphones, wireless body sensors, and smart meters will be on the order of zettabytes in the near future. Hence, relaying this huge amount of data to the remote cloud platform for further processing can lead to severe network congestion, which in turn results in latency issues that affect the overall QoS for various IIoT applications. To cope with these challenges, a recent paradigm shift in computing, popularly known as edge computing, has emerged. Edge computing can be viewed as a complement to cloud computing rather than as a competitor. The cooperation and interplay among cloud and edge devices can help to reduce energy consumption in addition to maintaining the QoS for various applications in the IIoT environment. However, a large number of migrations among edge devices and cloud servers leads to congestion in the underlying networks. Hence, to handle this problem, SDN, a recent programmable and scalable network paradigm, has emerged as a viable solution. Keeping focus on all the aforementioned issues, in this article an SDN-based edge-cloud interplay is presented to handle streaming big data in the IIoT environment, wherein SDN provides efficient middleware support. In the proposed solution, a multi-objective evolutionary algorithm using Tchebycheff decomposition for flow scheduling and routing in SDN is presented. The proposed scheme is evaluated with respect to two optimization objectives: the trade-off between energy efficiency and latency, and the trade-off between energy efficiency and bandwidth. The results obtained prove the effectiveness of the proposed flow scheduling scheme in the IIoT environment.
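Tchebycheff decomposition, as used in decomposition-based multi-objective optimization, scalarizes the objectives per weight vector as g(x | w, z*) = max_i w_i |f_i(x) − z*_i|, where z* is the ideal point; sweeping the weights traces the trade-off front. The candidate paths and objective values below are invented for illustration.

```python
# Hedged sketch of Tchebycheff scalarization over two objectives
# (energy, latency); each weight vector defines one scalar subproblem.

def tchebycheff(f, weights, z_star):
    """g(x | w, z*) = max_i w_i * |f_i(x) - z*_i| (to be minimized)."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Candidate flow schedules scored as (energy, latency); z* = ideal point.
candidates = {"path_a": (2.0, 9.0), "path_b": (5.0, 3.0), "path_c": (8.0, 1.0)}
z_star = (2.0, 1.0)

for weights in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    best = min(candidates,
               key=lambda k: tchebycheff(candidates[k], weights, z_star))
    print(weights, "->", best)
# energy-weighted -> path_a, balanced -> path_b, latency-weighted -> path_c
```

Each weight vector selects a different Pareto-optimal schedule, which is exactly the energy/latency trade-off the article evaluates.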
- Conference Article
- 10.1109/dcoss.2019.00048
- May 1, 2019
Executing analytics functionalities over data from highly distributed data sources and data streams is at the very core of the vast majority of Industrial Internet of Things (IIoT) applications. State-of-the-art streaming engines provide the means for high-performance analytics over high-velocity IIoT streams, yet they still need significant programming and customization effort when deployed in heterogeneous industrial environments. This paper introduces a configurable engine for distributed data analytics (DDA) for IIoT applications. The engine leverages the performance of state-of-the-art data streaming middleware platforms, which it augments with a set of digital models for configuring DDA operations. As such, the introduced engine reduces the effort needed to implement and deploy distributed data analytics in IIoT environments. The engine is available as open source software and has been validated in various real-life IIoT applications across different environments.
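The "digital model" idea — a declarative configuration assembling analytics operators rather than custom code — might look like the minimal sketch below. The operator names and config schema are invented for illustration and are not the engine's actual API.

```python
# Hedged sketch: a config ("digital model") assembles stream operators
# into a pipeline without per-deployment programming.

OPS = {
    "filter_gt": lambda th: (lambda xs: [x for x in xs if x > th]),
    "scale":     lambda k:  (lambda xs: [x * k for x in xs]),
    "mean":      lambda _:  (lambda xs: [sum(xs) / len(xs)] if xs else []),
}

def build_pipeline(config):
    """config: list of {"op": name, "arg": value} steps, applied in order."""
    stages = [OPS[step["op"]](step.get("arg")) for step in config]
    def run(stream):
        for stage in stages:
            stream = stage(stream)
        return stream
    return run

model = [{"op": "filter_gt", "arg": 10},
         {"op": "scale", "arg": 0.5},
         {"op": "mean"}]
analytics = build_pipeline(model)
print(analytics([4, 12, 20, 8, 16]))  # -> [8.0]
```

Swapping the config swaps the deployed analytics, which is the effort reduction the paper targets; a production engine would compile such models onto streaming middleware rather than plain lists.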
- Research Article
- 10.1007/s13369-024-09663-6
- Oct 21, 2024
- Arabian Journal for Science and Engineering
Cyber-attack detection within Industrial Internet of Things (IIoT) environments presents unique challenges due to the complex, resource-constrained, and real-time nature of these networks. Traditional detection techniques often struggle to adapt to the dynamic environment of IIoT. For instance, many existing methods rely on signature-based detection, which fails to identify evolving threats. Other approaches, such as anomaly-based detection, can generate a high rate of false positives, leading to inefficiencies in threat management. To address these challenges, we propose a novel detection and classification model specifically tailored for IIoT environments. The proposed model integrates Genetic Algorithms (GA) and Deep Learning (DL) to enhance cyber-attack detection within IIoT environments. The GA component optimises feature selection from raw network data, ensuring the extraction of meaningful and relevant features. Leveraging these selected features, the DL component constructs a robust model capable of accurately detecting and classifying various cyber-attack patterns across IIoT devices. Through experimentation on real-world IIoT network traffic (UNSW-NB 15 dataset), the proposed approach demonstrates its efficacy in improving attack detection accuracy and adaptability. The integration of GA and DL offers a synergistic solution that addresses the complexities of IIoT cybersecurity, contributing to a more secure and resilient IIoT ecosystem. The integrated GA–DL classification model developed in this work achieved 98% precision, 96% accuracy, 94% recall, and 12% losses with less than 50% of the features of the UNSW-NB 15 dataset. The reduction in features required for the identification and classification of cyber-attacks reduces the processing time by 50%.
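GA-driven feature selection follows the loop sketched below. Since the paper's fitness is a DL classifier's score, a toy fitness stands in for it here so the sketch is self-contained and fast; the `INFORMATIVE` set, penalty weights, and GA parameters are assumptions.

```python
import random

# Hedged sketch: a genetic algorithm over feature bitmasks. In the paper
# the fitness would be a DL model's validation score on the masked features.

random.seed(42)
N_FEATURES = 10
INFORMATIVE = {0, 3, 7}  # stand-in for features that actually help the model

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    cost = sum(mask)                       # penalize large subsets
    return 2.0 * hits - 0.1 * cost

def evolve(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]      # one-point crossover
            i = random.randrange(N_FEATURES)
            child[i] ^= random.random() < 0.1  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])  # selected feature indices
```

The size penalty in the fitness is what pushes toward small subsets, mirroring the paper's sub-50% feature count and the associated processing-time savings.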
- Research Article
- 10.1016/j.adhoc.2023.103320
- Oct 10, 2023
- Ad Hoc Networks
Blockchain and federated learning-based intrusion detection approaches for edge-enabled industrial IoT networks: a survey
- Research Article
- 10.3390/app15063121
- Mar 13, 2025
- Applied Sciences
The rapid expansion of the Industrial Internet of Things (IIoT) has revolutionized industrial automation and introduced significant cybersecurity challenges, particularly for supervisory control and data acquisition (SCADA) systems. Traditional intrusion detection systems (IDSs) often struggle to effectively identify and mitigate complex cyberthreats, such as denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. This study proposes an advanced IDS framework integrating machine learning, deep learning, and hybrid models to enhance cybersecurity in IIoT environments. Using the WUSTL-IIoT-2021 dataset, multiple classification models—including decision tree, random forest, multilayer perceptron (MLP), convolutional neural networks (CNNs), and hybrid deep learning architectures—were systematically evaluated based on key performance metrics, including accuracy, precision, recall, and F1 score. This research introduces several key innovations. First, it presents a comparative analysis of machine learning, deep learning, and hybrid models within a unified experimental framework, offering a comprehensive evaluation of various approaches. Second, while existing studies frequently favor hybrid models, findings from this study reveal that the standalone MLP model outperforms other architectures, achieving the highest detection accuracy of 99.99%. This outcome highlights the critical role of dataset-specific feature distributions in determining model effectiveness and calls for a more nuanced approach when selecting detection models for IIoT cybersecurity applications. Additionally, the study explores a broad range of hyperparameter configurations, optimizing model effectiveness for IIoT-specific intrusion detection. These contributions provide valuable insights for developing more efficient and adaptable IDS solutions in IIoT networks.
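The four reported metrics derive from the confusion matrix in the standard way; the counts below are illustrative, not taken from the WUSTL-IIoT-2021 experiments.

```python
# Hedged sketch: accuracy, precision, recall, and F1 from confusion-matrix
# counts (tp = attacks caught, fp = false alarms, fn = missed attacks,
# tn = benign traffic correctly passed).

def metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=950, fp=10, fn=40, tn=9000)
print(f"acc={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```

Note how heavy class imbalance (many tn) inflates accuracy relative to recall, which is why the study reports all four metrics rather than accuracy alone.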
- Research Article
- 10.19150/me.7919
- Dec 1, 2017
- Mining Engineering
The Industrial Internet of Things (IIoT), a concept that combines sensor networks and control systems, has been employed in several industries to improve productivity and safety. U.S. National Institute for Occupational Safety and Health (NIOSH) researchers are investigating IIoT applications to identify the challenges of and potential solutions for transferring IIoT from other industries to the mining industry. Specifically, NIOSH has reviewed existing sensors and communications network systems used in U.S. underground coal mines to determine whether they are capable of supporting IIoT systems. The results show that about 40 percent of the installed post-accident communication systems as of 2014 require minimal or no modification to support IIoT applications. NIOSH researchers also developed an IIoT monitoring and control prototype system using low-cost microcontroller Wi-Fi boards to detect a door opening on a refuge alternative, activate fans located inside the Pittsburgh Experimental Mine and actuate an alarm beacon on the surface. The results of this feasibility study can be used to explore IIoT applications in underground coal mines based on existing communication and tracking infrastructure.
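The prototype's monitoring-and-control logic reduces to a simple event loop. Hardware I/O is stubbed here with plain Python callables and lists, so this is a logic sketch only, not the NIOSH firmware.

```python
# Hedged sketch of the control logic: a door-open event on the refuge
# alternative activates the mine fans and the surface alarm beacon.

def control_step(door_open, fans, beacon):
    """One monitoring cycle; actuator commands are appended to stub lists."""
    if door_open:
        fans.append("on")
        beacon.append("alarm")
    return door_open

fans, beacon = [], []
events = [False, False, True, False]          # simulated sensor readings
triggered = [control_step(e, fans, beacon) for e in events]
print(fans, beacon)  # -> ['on'] ['alarm']
```

On a Wi-Fi microcontroller board the stub lists would be replaced by GPIO writes and network messages to the surface beacon.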
- Research Article
- 10.3390/app11209393
- Oct 10, 2021
- Applied Sciences
Industrial Internet of Things (IIoT) can be seen as an extension of Internet of Things (IoT) services and applications to industry, with the inclusion of Industry 4.0, providing automation, reliability, and control in production and manufacturing. IIoT has tremendous potential to accelerate industrial automation in many areas, including transportation, manufacturing, automotive, and marketing, to name a few. While the benefits of IIoT are evident, the development of large-scale IIoT systems faces various security challenges, resulting in many large-scale cyber-attacks, including fraudulent transactions or damage to critical infrastructure. Moreover, the large number of devices connected over the Internet and the resource limitations of those devices (e.g., battery, memory, and processing capability) pose further challenges. The IIoT inherits the insecurities of traditional communication and networking technologies; however, it requires further effort to customize the available security solutions, with more focus on critical industrial control systems. Several proposals discuss the issues of security, privacy, and trust in IIoT systems, but a comprehensive treatment considering the several aspects (e.g., users, devices, applications, cascading services, or the emergence of resources) of an IIoT system is missing from the present state of the art in IIoT research. In other words, recent work lacks a vision for securing an IIoT system through broader security analysis and potential countermeasures. To address this issue, in this paper we provide a comparative analysis of the security issues present in an IIoT system. We identify a list of security issues from logical, technological, and architectural points of view and consider the different IIoT security requirements. We also discuss the available IIoT architectures to examine these security concerns in a systematic way.
We show how the functioning of different layers of an IIoT architecture is affected by various security issues and report a list of potential countermeasures against them. This study also presents a list of future research directions towards the development of a large-scale, secure, and trustworthy IIoT system. The study helps understand the various security issues by indicating various threats and attacks present in an IIoT system.
- Research Article
- 10.1002/ett.4222
- Jan 28, 2021
- Transactions on Emerging Telecommunications Technologies
By providing ubiquitous connectivity, effective data analytics tools, and better decision support systems for improved market competitiveness, the industrial internet of things (IIoT) promises creative business models in different industrial domains. However, the conventional IIoT architecture can no longer provide adequate support as the number of devices, nodes, and the overall network size increase. Therefore, several challenges, such as security, privacy, centralization, trust, and integrity, prevent faster adoption of IIoT applications. To address the aforementioned challenges, we present a deep blockchain-based trustworthy privacy-preserving secured framework (DBTP2SF) for the IIoT environment. This framework comprises three modules: a trust management module, a two-level privacy-preservation module, and an anomaly detection module. In the trust management module, a blockchain (BC)-based address reputation system is proposed. In the two-level privacy-preservation module, a BC-based enhanced proof-of-work technique is applied together with an autoencoder to transform cyber-physical system data into a new reduced form that prevents inference and poisoning attacks. In the anomaly detection module, a deep neural network is deployed. Finally, owing to various limitations of current Cloud-Fog infrastructure, we present a BC-interplanetary file system (IPFS) integrated Cloud-Fog architecture, namely BlockCloud and BlockFog, to deploy the proposed DBTP2SF framework in the IIoT environment. The experiment is conducted using a realistic IIoT-based dataset, namely ToN-IoT. The performance analysis shows that, on the transformed dataset, the proposed approach outperforms peer privacy-preserving intrusion detection strategies, achieving an accuracy of 98.97% and a detection rate of 93.87%. Finally, we show the superiority of the DBTP2SF framework over some recent state-of-the-art techniques in both non-BC and BC-based IIoT systems.
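For orientation, the plain textbook proof-of-work primitive underlying such blockchain modules looks like the sketch below; the paper proposes an enhanced PoW variant, which is not reproduced here, and the block payload string is illustrative.

```python
import hashlib

# Hedged sketch of standard proof-of-work: find a nonce whose
# SHA-256(block_data + nonce) has `difficulty` leading zero hex digits.

def mine(block_data: str, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("iiot-telemetry-batch", difficulty=3)
digest = hashlib.sha256(f"iiot-telemetry-batch{nonce}".encode()).hexdigest()
print(digest[:3])  # -> '000'
```

Raising `difficulty` makes tampering exponentially more expensive, which is the integrity guarantee the BC modules rely on; "enhanced" variants typically change how difficulty or validation is computed.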
- Research Article
- 10.1016/j.iswa.2023.200298
- Nov 1, 2023
- Intelligent Systems with Applications
A comparative evaluation of intrusion detection systems on the edge-IIoT-2022 dataset