Articles published on IoT Environments
1412 Search results
Sorted by recency
- New
- Research Article
- 10.3390/electronics14244816
- Dec 7, 2025
- Electronics
- Hengzhou Ye + 3 more
In large-scale IoT environments, two major challenges—limited edge storage resources and complex task dependencies—make efficient management of service placement and task offloading particularly difficult. Existing approaches often optimize these two aspects independently while overlooking their tight interrelationship, resulting in poor performance in dynamic settings. To address this co-optimization challenge, we propose a Hierarchical Deep Q-Network (HDQN) framework that simultaneously manages service placement and task offloading in task-dependent MEC systems. HDQN divides the decision process into two levels: a meta-controller for long-term service placement and resource planning, and a subcontroller that makes real-time task offloading decisions based on the latest system state. This two-layer structure enables the framework to adapt efficiently to changing conditions while meeting both dependency and resource constraints. Evaluation across diverse experimental conditions—including varying numbers of users, MEC servers, communication rates, and service types—demonstrates that the proposed HDQN framework achieves significantly lower task latency than widely used baselines such as DDPG and DQN.
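The two-level decision structure the abstract describes can be sketched in miniature. Everything below is an illustrative stand-in, not the paper's HDQN: tabular Q-values replace the deep networks, and the placement names, period, and epsilon-greedy policy are assumptions.

```python
import random

class TwoLevelController:
    """Toy two-level decision loop: a meta-controller refreshes a
    long-horizon service placement every `period` steps, while a
    sub-controller picks a per-step offloading action via
    epsilon-greedy lookup. (Sketch of the hierarchical idea only.)"""

    def __init__(self, placements, offload_actions, period=10, eps=0.1):
        self.placements = placements            # candidate service placements
        self.offload_actions = offload_actions  # per-task offloading choices
        self.period = period                    # meta-decision interval
        self.eps = eps
        self.meta_q = {p: 0.0 for p in placements}   # long-term values
        self.sub_q = {}   # (placement, state, action) -> estimated value
        self.current = placements[0]

    def step(self, t, state):
        if t % self.period == 0:                # long-term placement decision
            self.current = max(self.meta_q, key=self.meta_q.get)
        if random.random() < self.eps:          # explore an offloading action
            return self.current, random.choice(self.offload_actions)
        best = max(self.offload_actions,        # exploit current estimates
                   key=lambda a: self.sub_q.get((self.current, state, a), 0.0))
        return self.current, best

ctrl = TwoLevelController(["edge-A", "edge-B"], ["local", "offload"])
placement, action = ctrl.step(t=0, state="low-load")
```

In a real HDQN both Q-tables would be neural networks trained from latency rewards; the split shown here only illustrates how the slow placement decision gates the fast offloading decision.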
- New
- Research Article
- 10.1016/j.mex.2025.103499
- Dec 1, 2025
- MethodsX
- Balaganesh Bojarajulu + 2 more
Enhanced SqueezeNet model for detecting IoT-Bot attacks: A comprehensive approach.
- New
- Research Article
- 10.64509/jicn.12.38
- Nov 27, 2025
- Journal of Intelligent Computing and Networking
- Kunlong Jin + 4 more
To enhance the searchability of encrypted cloud data while preserving user privacy, public-key encryption with keyword search (PEKS) has been regarded as a promising approach. However, existing schemes still incur substantial computational overhead in resource-constrained IoT environments. Moreover, IoT applications frequently reuse certain keywords during operations such as data labeling and status reporting, making them more susceptible to frequency-analysis attacks. To address this, this paper analyzes the limitations of existing pairing-free public-key authenticated encryption with keyword search (PAEKS) schemes in resisting frequency analysis and proposes a pairing-free PAEKS construction based on elliptic-curve scalar multiplication. A probabilistic trapdoor is further introduced to weaken the linkability between keywords and their occurrence frequencies. The proposed scheme effectively mitigates frequency-analysis attacks and, by eliminating costly bilinear pairings, significantly reduces computational burden. Experimental results show that, compared with conventional schemes, the proposed approach achieves lower runtime in ciphertext and trapdoor generation while providing stronger protection against frequency analysis, thereby attaining a more favorable security–efficiency trade-off suitable for IoT deployments with constrained computation and bandwidth.
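The frequency-analysis weakness and the probabilistic-trapdoor remedy can be illustrated with a toy symmetric analogue, with HMAC standing in for the paper's elliptic-curve construction: a fresh nonce per query makes repeated trapdoors for the same keyword unlinkable, while the server can still test them against stored tags. All names here are hypothetical.

```python
import hmac, hashlib, os

def tag(key: bytes, keyword: str) -> bytes:
    # Deterministic per-keyword tag stored alongside the ciphertext.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def trapdoor(key: bytes, keyword: str) -> tuple:
    # Probabilistic trapdoor: a fresh nonce per query, so two searches
    # for the same keyword look different to an outside observer,
    # blunting frequency analysis over repeated queries.
    r = os.urandom(16)
    return r, hmac.new(tag(key, keyword), r, hashlib.sha256).digest()

def server_test(stored_tag: bytes, td: tuple) -> bool:
    r, v = td
    expected = hmac.new(stored_tag, r, hashlib.sha256).digest()
    return hmac.compare_digest(expected, v)

k = os.urandom(32)
t = tag(k, "temperature")
td1, td2 = trapdoor(k, "temperature"), trapdoor(k, "temperature")
assert td1 != td2                       # repeated queries are unlinkable
assert server_test(t, td1) and server_test(t, td2)
assert not server_test(tag(k, "humidity"), td1)
```

The pairing-free PAEKS scheme achieves the analogous property with elliptic-curve scalar multiplications in the public-key setting, which this symmetric sketch cannot capture.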
- New
- Research Article
- 10.11648/j.iotcc.20251304.13
- Nov 26, 2025
- Internet of Things and Cloud Computing
- Boye Frederick + 1 more
The Industrial Internet of Things has enhanced automation, real-time monitoring, and predictive decision-making in modern industries. However, the growing connectivity of industrial IoT systems has exposed them to severe cyber threats such as ransomware, man-in-the-middle (MitM), and DDoS attacks, which can disrupt critical operations and compromise safety. Conventional Intrusion Detection Systems (IDS) often face limitations in achieving high accuracy, rapid detection, and low latency while minimizing false alarms. This study, which adopts mixed research methods (qualitative and quantitative), proposes a CNN-Fuzzy Logic hybrid model for real-time intrusion detection and prevention in industrial IoT environments. Convolutional Neural Networks (CNN) are employed to extract deep hierarchical features from industrial IoT traffic, while fuzzy logic is integrated to enhance decision-making under uncertainty and reduce false positives. The model was trained and evaluated using Kaggle cybersecurity datasets containing ransomware, MitM, and DDoS attacks. Performance evaluation demonstrates that the CNN-Fuzzy IDS achieves an accuracy of 92.5%, a detection rate of approximately 93%, a false positive rate (FPR) of 2.51%, and a reduced average latency of 1.207 µs (7.14% of total latency), which is acceptable for most industrial IoT applications. These results highlight the effectiveness of hybrid intelligent systems in enhancing the resilience and reliability of industrial IoT cybersecurity. The proposed model provides a promising pathway for deploying scalable, adaptive, and real-time IDS solutions in critical industrial infrastructures. To keep computational overhead manageable, researchers should employ a minimum practical setup: a modern multi-core CPU, 8–16 GB RAM, an SSD, and a stable OS (Windows 10 only on modern hardware), or run a lightweight Linux distribution at the edge and offload heavy tasks elsewhere.
Future research should focus on optimizing hybrid ML architectures for deployment on resource-constrained industrial IoT devices, integrating the approach into broader threat-detection pipelines, and expanding evaluation to real-world industrial environments.
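One common way to layer fuzzy logic over a CNN's confidence score, in the spirit the abstract describes, is a small rule base of membership functions that suppresses borderline detections. The membership shapes, breakpoints, and labels below are hypothetical, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_verdict(cnn_score: float) -> str:
    # Hypothetical rule base: raise an alert only when 'attack'
    # membership dominates; scores near 0.5 are routed to review,
    # which is one way a fuzzy layer can cut false positives.
    benign = tri(cnn_score, -0.01, 0.0, 0.5)
    unsure = tri(cnn_score, 0.3, 0.5, 0.7)
    attack = tri(cnn_score, 0.5, 1.0, 1.01)
    label, _ = max([("benign", benign), ("review", unsure), ("attack", attack)],
                   key=lambda kv: kv[1])
    return label

assert fuzzy_verdict(0.95) == "attack"
assert fuzzy_verdict(0.5) == "review"
assert fuzzy_verdict(0.1) == "benign"
```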
- New
- Research Article
- 10.1007/s11276-025-04035-w
- Nov 24, 2025
- Wireless Networks
- Moses Odiagbe + 2 more
Smart traffic management for VANET and IoT environment using lightweight message verification and DD-Q learning techniques
- New
- Research Article
- 10.3791/68750
- Nov 18, 2025
- Journal of Visualized Experiments
- Muskan Garg + 1 more
A Novel Hybrid Deep Learning Model for Attack Detection in IoT Environment: Convolutional Neural Network with Transformer Approach
- Research Article
- 10.3390/s25226867
- Nov 10, 2025
- Sensors (Basel, Switzerland)
- Mikail Mohammed Salim + 2 more
The integration of federated learning into Industrial Internet of Things (IIoT) networks enables collaborative intelligence but also exposes systems to identity spoofing, model poisoning, and malicious update injection. This paper presents Leash-FL, a lightweight self-healing framework that combines certificateless elliptic curve cryptography with blockchain to enhance resilience in resource-constrained IoT environments. Certificateless ECC with pseudonym rotation enables efficient millisecond-scale authentication with minimal metadata, supporting secure and unlinkable participation. A similarity-governed screening mechanism filters poisoned and free-rider updates, while blockchain-backed checkpoint rollback ensures rapid recovery without service interruption. Experiments on intrusion detection, anomaly detection, and vision datasets show that Leash-FL sustains over 85 percent accuracy with 50 percent malicious clients, reduces backdoor success rates to under 5 percent within four recovery rounds, and restores accuracy up to three times faster than anomaly-screening baselines. The blockchain layer achieves low-latency consensus, high throughput, and modest ledger growth, significantly outperforming Ethereum-based systems. Membership changes are efficiently managed with sub-50 ms join and leave operations and re-admission within 60 ms, while guaranteeing forward and backward secrecy. Leash-FL delivers a cryptography-driven approach that unifies lightweight authentication, blockchain auditability, and self-healing recovery into a secure, resilient, and scalable federated learning solution for next-generation IIoT networks.
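A minimal version of similarity-governed screening compares each client update against a robust aggregate and drops dissimilar ones. The coordinate-wise median reference and the 0.5 cosine threshold below are assumptions for illustration, not Leash-FL's actual rule.

```python
import math, statistics

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def screen_updates(updates, threshold=0.5):
    # Compare each client update to the coordinate-wise median, which a
    # minority of poisoned updates cannot drag far, and keep only
    # updates sufficiently aligned with it.
    dims = range(len(updates[0]))
    ref = [statistics.median(u[i] for u in updates) for i in dims]
    return [u for u in updates if cosine(u, ref) >= threshold]

honest = [[1.0, 1.1, 0.9], [0.9, 1.0, 1.0], [1.1, 0.9, 1.0]]
poisoned = [[-5.0, -4.0, -6.0]]          # sign-flipped, scaled-up update
kept = screen_updates(honest + poisoned)
```

Note the design choice: a mean reference would be dragged toward the attacker by the large poisoned vector, while the median reference stays near the honest cluster.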
- Research Article
- 10.1080/1448837x.2025.2583458
- Nov 10, 2025
- Australian Journal of Electrical and Electronics Engineering
- Baolin Zheng + 1 more
ABSTRACT Patient care has been transformed by the increasing adoption of Healthcare IoT (H-IoT) systems, which allow continuous monitoring and personalised treatment. However, connected medical devices pose significant security and privacy challenges because such systems produce large quantities of sensitive personal data. To address these challenges, this paper proposes SecureFogDL, a federated BERT-based Transformer classifier framework built on fog computing to improve security, privacy, and performance for healthcare IoT. The framework uses federated learning to keep sensitive data decentralised and private, while applying the BERT-based Transformer classifier to accurately identify and mitigate attacks such as DDoS. Autoencoders are applied for feature extraction, reducing the complexity of IoT traffic and improving model performance on fog nodes with limited resources. SecureFogDL thus offers a scalable, privacy-preserving solution for attack detection and decision-making within healthcare IoT environments.
- Research Article
- 10.1002/spy2.70142
- Nov 1, 2025
- SECURITY AND PRIVACY
- Supongmen Walling + 1 more
ABSTRACT The rapid growth of IoT technology brings unprecedented connectivity and convenience but also introduces serious security challenges. Addressing these vulnerabilities is critical, necessitating robust intrusion detection techniques. Anomaly‐based Network Intrusion Detection Systems (NIDS) play a pivotal role in securing IoT networks, acting as a cornerstone of modern cybersecurity infrastructure. However, due to the resource constraints and protocol diversity inherent in IoT environments, effective and efficient feature selection becomes essential. This research proposes a novel anomaly‐based NIDS framework that incorporates an adaptive, attack‐aware feature selection strategy. Initially, two well‐established filter‐based techniques—one‐way ANOVA and Correlation Feature Selection (CFS)—are employed to score feature relevance. Rather than applying a static selection threshold uniformly across the dataset, we introduce a percentile‐based adaptive thresholding mechanism that adjusts dynamically based on the imbalance and statistical distribution of each attack category. This ensures that feature selection remains sensitive to the varying relevance of features across different attack types, enabling better generalization and discrimination in heterogeneous traffic patterns. Selected features from ANOVA and CFS are then integrated using union and intersection operations derived from set theory to construct optimal feature subsets. These refined subsets are fed into a two‐level stacking ensemble classifier trained on attack‐specific patterns. Our framework is evaluated on three benchmark datasets—NSL‐KDD, UNSW‐NB15, and CICIDS‐2017—where it consistently outperforms existing methods. Notably, it achieves 98.126% accuracy on UNSW‐NB15 (a 3.026‐point improvement over [23]), 99.435% on NSL‐KDD (surpassing [22] by 3.835 points), and a near‐perfect 99.96% on CICIDS‐2017—the highest reported accuracy in the current literature. 
These results validate the effectiveness of the proposed adaptive approach and establish new benchmarks for intrusion detection in IoT ecosystems.
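The percentile-based adaptive thresholding idea can be sketched as follows: each attack class gets its own percentile cutoff over the feature-relevance scores, loosened for rare classes. The percentile values, imbalance cutoff, and score inputs are illustrative, not the paper's calibration.

```python
def percentile_threshold(scores, pct):
    """Value at the given percentile of `scores` (simple index method)."""
    s = sorted(scores)
    idx = min(int(len(s) * pct / 100), len(s) - 1)
    return s[idx]

def adaptive_select(scores_by_class, class_ratio,
                    minority_pct=60, majority_pct=85, imbalance_cutoff=0.1):
    # Per-class feature selection: rare (minority) attack classes get a
    # lower percentile threshold, so more features survive for them.
    # All numeric settings here are hypothetical.
    selected = {}
    for cls, scores in scores_by_class.items():
        pct = minority_pct if class_ratio[cls] < imbalance_cutoff else majority_pct
        thr = percentile_threshold(scores, pct)
        selected[cls] = [i for i, sc in enumerate(scores) if sc >= thr]
    return selected

scores = {"dos": [0.9, 0.2, 0.7, 0.1, 0.8],   # hypothetical relevance scores
          "u2r": [0.4, 0.3, 0.6, 0.2, 0.5]}
ratio = {"dos": 0.45, "u2r": 0.02}            # u2r is the rare class
sel = adaptive_select(scores, ratio)
```

In the paper's pipeline this step would run on ANOVA and CFS scores, with the resulting per-class subsets merged by set union and intersection before classification.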
- Research Article
- 10.1016/j.oceaneng.2025.122225
- Nov 1, 2025
- Ocean Engineering
- Xiangfei Meng + 5 more
Finite-time output feedback control of unmanned marine surface vessels with rotatable thrusters and propellers under deception attacks in an IoT environment
- Research Article
- 10.1016/j.adhoc.2025.103974
- Nov 1, 2025
- Ad Hoc Networks
- Saugata Roy + 2 more
A multi-depot provisioned UAV swarm trajectory optimization scheme for collaborative data acquisition in a large-scale IoT environment
- Research Article
- 10.1038/s41598-025-22070-7
- Oct 31, 2025
- Scientific Reports
- Hanan Abdullah Mengash + 2 more
Gesture recognition (GR) is an emerging and wide-ranging area of research. GR is extensively applied in sign language, immersive game technology, and other computer interfaces. People with visual impairments face challenges in completing everyday tasks, including navigating environments, using technologies, and engaging in social interactions. They also face challenges in balancing their independence with the need for protection in day-to-day life. The communication of visually challenged and deaf people can be recognized by recording it and comparing it against recent datasets, thereby establishing their intentions. Conventional machine learning (ML) models rely on handcrafted features but often underperform in real-time environments, while deep learning (DL) models have recently attracted wide research interest and largely superseded them. Therefore, this study presents a new approach, Enhancing Gesture Recognition for the Visually Impaired using Deep Learning and an Improved Snake Optimization Algorithm (EGRVI-DLISOA), in an IoT environment. The EGRVI-DLISOA approach is an advanced GR system powered by DL in an IoT environment, designed to provide real-time interpretation of gestures to assist the visually impaired. First, the EGRVI-DLISOA technique applies the Sobel filter (SF) for noise elimination. For feature extraction, the SqueezeNet model is utilized due to its efficiency in capturing meaningful features from complex visual data. For accurate GR, a long short-term memory (LSTM) classifier is implemented, and its hyperparameter values are fine-tuned using the improved snake optimization algorithm (ISOA). The EGRVI-DLISOA technique is evaluated on a hand gestures dataset.
The comparison study of the EGRVI-DLISOA technique revealed a superior accuracy value of 98.62% compared to existing models.
- Research Article
- 10.1038/s41598-025-22117-9
- Oct 31, 2025
- Scientific Reports
- Gaurav Verma + 4 more
In modern precision agriculture, early and accurate identification of crop diseases is crucial for reducing yield loss and minimizing pesticide overuse. This study proposes an IoT-enabled framework that integrates convolutional neural networks (CNNs) with image processing techniques for automated classification and quantification of diseases in rice and potato crops. A custom-curated dataset was developed, comprising over 1,800 images acquired through smartphone cameras and foldscope devices under natural lighting conditions. The proposed CNN model achieved a classification accuracy of over 95%, with a disease quantification accuracy of 90.5%, calculated using pixel-level segmentation of infected regions. Experimental results revealed infection percentages ranging from 0.68% in early-stage cases to 13.98% in severely affected samples, enabling precise disease severity analysis. The framework includes a MATLAB-based graphical user interface (GUI) for real-time visualization of classification results and severity scores. Training convergence was demonstrated with a mini-batch loss reduction from 1.0879 to 0.0094 over 200 iterations, and classification confidence scores exceeding 90% for most disease categories. In addition to software implementation, the model was synthesized for hardware deployment using FPGA, demonstrating less than 5% LUT and 1% register usage for 512 × 512 images, ensuring resource-efficient performance in IoT environments. This work introduces a scalable, field-deployable tool for crop health monitoring, with potential to enhance sustainable farming practices through timely disease management.
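Pixel-level disease quantification of the kind described reduces to a ratio over a segmentation mask. The binary mask and the `infection_percentage` helper below are hypothetical; the paper's segmenter and severity scale are not reproduced.

```python
def infection_percentage(mask):
    """Disease severity as the share of pixels flagged infected.
    `mask` is a 2-D list of 0/1 values from a (hypothetical)
    pixel-level segmenter of the leaf image."""
    total = sum(len(row) for row in mask)
    infected = sum(sum(row) for row in mask)
    return 100.0 * infected / total

# Tiny 3x4 mask: 3 of 12 pixels marked infected.
mask = [[0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
severity = infection_percentage(mask)
```

On a real 512 × 512 image the same ratio is computed over the segmented infected region, which is how per-sample figures such as 0.68% or 13.98% arise.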
- Research Article
- 10.47760/cognizance.2025.v05i10.026
- Oct 30, 2025
- Cognizance Journal of Multidisciplinary Studies
- Jhon Ludwig C Gayapa + 4 more
The Internet of Things ecosystem, which has developed at a rapid pace, enables interconnected smart equipment in homes, healthcare, transport, and critical infrastructure. This connectivity brings high levels of efficiency and convenience, but it is also accompanied by serious security lapses. As IoT ecosystems grow in size and complexity, signature-driven and rule-based approaches no longer provide timely, effective, and efficient defense. This paper reviews deep reinforcement learning (DRL) based intrusion detection systems against key requirements such as detection accuracy, latency, adaptability, and scalability. The findings show that DRL models are likely to improve real-time threat detection while maintaining a degree of computational efficiency. However, training overhead, the interpretability of decisions, and the energy consumption of resource-constrained devices must be addressed before DRL is adopted more widely. Finally, the research assesses the potential of DRL as a security mechanism for IoT systems. By exposing the practical advantages and disadvantages of DRL models, this study can be regarded as a step toward smart, dynamic, and scalable cybersecurity capable of protecting IoT-based infrastructures against a new level of cyber-attacks.
- Research Article
- 10.9734/jerr/2025/v27i111696
- Oct 29, 2025
- Journal of Engineering Research and Reports
- Emonena Patrick Obrik-Uloho + 4 more
This research developed a predictive and cybersecurity-aware framework to uncover and leverage underreported clinical and operational signals within dark data embedded in digital health ecosystems. Addressing the paradox of data-rich yet insight-poor healthcare systems, the study adopted a sequential explanatory mixed-methods design that combined quantitative machine learning analysis with qualitative stakeholder evaluation. The datasets incorporated sources such as CIC-IDS-2018, IoT-23, and the Zero-Day Exploit Corpus, reflecting medical IoT environments like smartwatches and insulin pumps connected through Wi-Fi, Bluetooth, and 5G networks. Neural network models achieved an overall threat and anomaly detection rate of 95.9%, with cardiac monitor data performing best at 97.1% due to distinctive behavioral patterns. The framework identified novel clinical and cyber-physical signals, improving rare disease detection and reducing false positives, thereby enhancing reliability and trust. Qualitative feedback from healthcare practitioners confirmed the system’s usability and interpretability. The integration of adversarial simulation data strengthened resilience against zero-day threats, positioning the framework as a scalable solution for improving patient safety, regulatory compliance, and precision medicine in digital healthcare.
- Research Article
- 10.3390/s25216632
- Oct 29, 2025
- Sensors (Basel, Switzerland)
- Shenghao Nie + 3 more
In the booming digital economy, data circulation—particularly for massive multimodal data generated by IoT sensor networks—faces critical challenges: ambiguous ownership and broken cross-domain traceability. Traditional property rights theory, ill-suited to data’s non-rivalrous nature, leads to ownership fuzziness after multi-source fusion and traceability gaps in cross-organizational flows, hindering marketization. This study aims to establish native ownership confirmation capabilities in trusted IoT-driven data ecosystems. The approach involves a dual-factor system: the collaborative extraction of text (from sensor-generated inspection reports), numerical (from industrial sensor measurements), visual (from 3D scanning sensors), and spatio-temporal features (from GPS and IoT device logs) generates unique SHA-256 fingerprints (first factor), while RSA/ECDSA private key signatures (linked to sensor node identities) bind ownership (second factor). An intermediate state integrates these with metadata, supported by blockchain (consortium chain + IPFS) and cross-domain protocols optimized for IoT environments to ensure full-link traceability. This scheme, tailored to the characteristics of IoT sensor networks, breaks traditional ownership confirmation bottlenecks in multi-source fusion, demonstrating strong performance in ownership recognition, anti-tampering robustness, cross-domain traceability and encryption performance. It offers technical and theoretical support for standardized data components and the marketization of data elements within IoT ecosystems.
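The dual-factor binding can be sketched with the standard library. `fingerprint` and `bind_ownership` are hypothetical helpers, the JSON canonicalization is an assumed encoding, and HMAC stands in for the RSA/ECDSA signature, since the Python stdlib ships no public-key signer.

```python
import hashlib, hmac, json

def fingerprint(text_feat, numeric_feat, visual_feat, st_feat) -> str:
    # First factor: SHA-256 over a canonical serialization of the
    # multimodal features (text, numeric, visual, spatio-temporal).
    # The real feature extractors are not reproduced here.
    payload = json.dumps([text_feat, numeric_feat, visual_feat, st_feat],
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def bind_ownership(fp: str, owner_key: bytes) -> str:
    # Second factor: the owner binds the fingerprint under a secret key.
    # HMAC is a stand-in for the scheme's RSA/ECDSA signature.
    return hmac.new(owner_key, fp.encode(), hashlib.sha256).hexdigest()

fp = fingerprint("inspection-ok",               # from a sensor report
                 [21.5, 0.93],                  # industrial measurements
                 "mesh-hash-abc",               # 3D-scan digest
                 ("2025-10-29", 31.23, 121.47)) # timestamp + GPS
record = {"fingerprint": fp,
          "signature": bind_ownership(fp, b"node-17-secret")}
```

Any change to a single feature yields a different fingerprint, which is what makes the on-chain record tamper-evident across cross-domain flows.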
- Research Article
- 10.1007/s11277-025-11847-8
- Oct 28, 2025
- Wireless Personal Communications
- Lacchita Soni + 2 more
LB-RFID: Provably Secure Post-quantum Authentication Protocol for RFID Devices in Resource-constrained IoT Environment
- Research Article
- 10.1007/s41870-025-02817-1
- Oct 25, 2025
- International Journal of Information Technology
- Vadala Nagamani + 1 more
Multi objective hybrid optimization for secure and energy effective data communication in wireless sensor network for IoT environment
- Research Article
- 10.1007/s11416-025-00582-0
- Oct 22, 2025
- Journal of Computer Virology and Hacking Techniques
- Milad Rahmati + 1 more
Adaptive Federated Edge Intelligence for Real-Time Cyberthreat Detection in Resource-Constrained IoT Environments: A Lightweight Deep Learning Approach
- Research Article
- 10.1038/s41598-025-20175-7
- Oct 16, 2025
- Scientific Reports
- J Jasmine Shirley + 1 more
The recent decade has seen enormous growth in the Internet of Things field. This development has significantly expanded the attack surface for cyber-threats, among which Distributed Denial of Service (DDoS) attacks have become one of the most significant and common. These attacks can severely disrupt critical services if not detected and handled in time. To provide a reliable and secure IoT environment, accurate and efficient mechanisms for detecting DDoS attacks in real time are essential. While state-of-the-art deep learning models like CNNs and LSTMs offer high accuracy, their computational overhead often makes them unsuitable for resource-constrained IoT environments. To address this gap, we propose a robust hybrid framework, the PSO-DT-based BagDT ensemble model. This model combines Particle Swarm Optimization with a Decision Tree to find the best feature subset, reducing dimensionality and complexity. The proposed PSO-DT feature selection algorithm is evaluated across several ensemble learners, namely Random Subspace KNN, AdaBoost, RUSBoost, and Bagged Decision Trees, and helps reduce both computational cost and model size. Our PSO-DT-based Bagged DT model demonstrates superior performance, achieving an accuracy of 99.96% along with a macro-average precision, recall, and F1-score of 0.99. Among all the variants, BagDT performed best, with a 4.13% increase in accuracy and a 95.49% reduction in training time. Overall throughput increased by 63.52%, confirming the efficiency of the proposed PSO-DT-based BagDT ensemble model as a real-time, scalable solution suitable for deployment in contemporary smart environments.
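A bare-bones binary PSO over feature subsets conveys the PSO-DT idea in outline. The `toy_fitness` function is a stand-in for the decision-tree validation accuracy the paper actually optimizes, the per-feature `relevance` scores are invented, and the bit-flip update is deliberately crude compared with a real velocity rule.

```python
import random

def toy_fitness(subset, relevance, penalty=0.05):
    # Stand-in objective: reward relevant features, penalize subset
    # size (mimicking the accuracy-vs-dimensionality trade-off).
    return sum(relevance[i] for i in subset) - penalty * len(subset)

def binary_pso(n_features, relevance, swarm=8, iters=30, seed=1):
    rng = random.Random(seed)
    best_subset, best_fit = (), float("-inf")
    particles = [[rng.random() < 0.5 for _ in range(n_features)]
                 for _ in range(swarm)]
    for _ in range(iters):
        for p in particles:
            subset = tuple(i for i, on in enumerate(p) if on)
            fit = toy_fitness(subset, relevance)
            if fit > best_fit:
                best_subset, best_fit = subset, fit
            # Crude update: nudge each bit toward the global best.
            for i in range(n_features):
                if rng.random() < 0.2:
                    p[i] = i in best_subset
    return best_subset

relevance = [0.9, 0.01, 0.6, 0.02, 0.8]   # hypothetical feature scores
chosen = binary_pso(5, relevance)
```

In the actual framework, each candidate subset would be scored by training a Decision Tree, and the surviving features feed the BagDT ensemble.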