CyberAIBot: Artificial Intelligence in an Intrusion Detection System for CyberSecurity in the IoT

The cyber ecosystem presents two interesting properties. Attackers and defenders are normally the same entity: detailed knowledge of defensive strategies optimises an attack, whereas a good defence, built on a layered structure, also includes offensive capabilities. In addition, Artificial Intelligence (AI) provides new techniques and tools to both attackers and defenders, such as Generative Adversarial Networks (GANs) with Generators and Discriminators for impersonation, or Deep Learning (DL) for exhaustive scanning. To address this dilemma, this article presents CyberAIBot, an AI-based Intrusion Detection System for cybersecurity in the Internet of Things (IoT), aimed at Operational Technology (OT) and Information Technology (IT) network traffic. CyberAIBot is based on a DL management structure in a private or local edge cloud computing approach, where AI makes decisions as close as possible to the source of the data. CyberAIBot gradually detects, learns, and adapts to different cyber attacks. In detail, CyberAIBot uses DL technical clusters trained on specific attacks and a DL management cluster that specialises in management decisions rather than technical evaluations; this management cluster supervises conflicting classifications from the technical clusters. CyberAIBot is trained on several datasets and its performance is evaluated with two classification algorithms: Long Short-Term Memory (LSTM) networks and Support Vector Machines (SVMs). The SVM DL clusters learn faster (on average 15x), although the LSTM DL clusters perform better (on average by 30%). The LSTM DL management cluster performs better than the SVM DL management cluster, although it recognises fewer traffic types. The total number of data points analysed by CyberAIBot is 5.52E+08, comparable to the distance from the Earth to the Moon in metres.
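As a reading aid, the sketch below illustrates the "technical clusters plus management cluster" idea described in the abstract: per-attack classifiers feed their scores to a supervising model that arbitrates between them. It uses scikit-learn SVMs and a logistic regressor on synthetic data as stand-ins for the paper's DL clusters; all names, features, and thresholds are assumptions, not the authors' implementation.

```python
# Minimal sketch of per-attack "technical clusters" supervised by a "management
# cluster". Models and data are illustrative placeholders, not CyberAIBot itself.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic traffic features; one binary label per attack family (stand-ins).
X = rng.normal(size=(2000, 20))
y_dos = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # hypothetical "DoS" label
y_scan = (X[:, 2] - 0.3 * X[:, 3] > 0).astype(int)   # hypothetical "scan" label

# Technical clusters: each model is trained on a single attack type.
dos_cluster = SVC(probability=True).fit(X[:1500], y_dos[:1500])
scan_cluster = SVC(probability=True).fit(X[:1500], y_scan[:1500])

# Management cluster: arbitrates the technical verdicts rather than the raw
# traffic, which is where conflicting classifications get resolved.
tech_scores = np.column_stack([
    dos_cluster.predict_proba(X[:1500])[:, 1],
    scan_cluster.predict_proba(X[:1500])[:, 1],
])
y_any_attack = ((y_dos[:1500] + y_scan[:1500]) > 0).astype(int)
manager = LogisticRegression().fit(tech_scores, y_any_attack)

# Inference: technical scores first, management decision on top.
new_scores = np.column_stack([
    dos_cluster.predict_proba(X[1500:])[:, 1],
    scan_cluster.predict_proba(X[1500:])[:, 1],
])
print("management-cluster alerts:", manager.predict(new_scores).sum())
```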

LGTDA: Bandwidth exhaustion attack on Ethereum via dust transactions

Dust attacks typically involve sending a large number of low-value transactions to numerous addresses, aiming to facilitate transaction tracking and undermine privacy, while simultaneously disrupting the market and increasing transaction delays. These transactions not only impact the network but also incur significant costs. This paper introduces a low-cost attack method called LGTDA, which achieves network partitioning through dust attacks. The method hinders block synchronization by consuming node bandwidth, leading to denial of service (DoS) for nodes and eventually causing large-scale network partitioning. In LGTDA, the attacker does not need real control over the nodes in the network, nor is there any requirement on the number of peer connections to the nodes; the attack can even be initiated simply by invoking RPC services to send transactions. While ensuring that the attack transactions remain valid, LGTDA sends a large volume of low-value, high-frequency dust transactions to the network, relying on nodes for global broadcasting. This sustained attack can significantly impede the growth of block heights among nodes, resulting in network partitioning. We discuss the implications of the LGTDA attack, including its destructive capability, low cost, and ease of execution. Additionally, we analyze the limitations of this attack. Compared to grid lighting attacks, the LGTDA attack has a broader impact range and is not limited by the positional relationship with the victim node. Through experimental validation in a controlled environment, we confirm the effectiveness of this attack.
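The observable symptom the abstract describes is block heights that stop growing in step across nodes. The following detection-side sketch polls several Ethereum RPC endpoints and flags divergence; it is not the paper's attack or tooling, and the endpoint URLs and threshold are hypothetical placeholders.

```python
# Illustrative monitor for the partition symptom described above: block heights
# diverging across nodes. Endpoints and threshold are assumed values.
import time
from web3 import Web3

NODE_RPC_URLS = [
    "http://node-a.example:8545",
    "http://node-b.example:8545",
    "http://node-c.example:8545",
]
MAX_HEIGHT_GAP = 5  # blocks; an assumed alert threshold

nodes = [Web3(Web3.HTTPProvider(url)) for url in NODE_RPC_URLS]

while True:
    heights = [w3.eth.block_number for w3 in nodes]
    gap = max(heights) - min(heights)
    if gap > MAX_HEIGHT_GAP:
        # Persistent divergence suggests nodes are no longer synchronizing,
        # e.g. because their bandwidth is saturated by dust traffic.
        print(f"possible partition: heights={heights}, gap={gap}")
    time.sleep(15)
```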

Feature Bagging with Nested Rotations (FBNR) for anomaly detection in multivariate time series

Detecting anomalies in multivariate time series poses a significant challenge across various domains. The infrequent occurrence of anomalies in real-world data, as well as the lack of a large number of annotated samples, makes it a complex task for classification algorithms. Deep Neural Network approaches based on Long Short-Term Memory (LSTM) networks, Autoencoders, and Variational Autoencoders (VAEs), among others, prove effective at handling imbalanced data. However, the same does not hold when such algorithms are applied to multivariate time series, as their performance degrades significantly. Our main hypothesis is that this is due to anomalies stemming from a small subset of the feature set. To mitigate these issues in the multivariate setting, we propose forming an ensemble of base models by combining different feature selection and transformation techniques. The proposed processing pipeline applies a Feature Bagging technique across multiple individual models, so that each model considers a separate feature subset. These subsets are then partitioned and transformed using multiple nested rotations derived from Principal Component Analysis (PCA). This approach aims to identify anomalies that arise from only a small portion of the feature set, while also introducing diversity by transforming the subspaces. Each model provides an anomaly score, and these scores are aggregated via an unsupervised decision fusion model; a semi-supervised fusion model was also explored, in which a Logistic Regressor was applied to the individual model outputs. The proposed methodology is evaluated on the Skoltech Anomaly Benchmark (SKAB), containing multivariate time series related to water flow in a closed circuit, as well as the Server Machine Dataset (SMD), which was collected from a large Internet company. The experimental results reveal that the proposed ensemble technique surpasses state-of-the-art algorithms. The unsupervised approach demonstrated a performance improvement of 2% for SKAB and 3% for SMD compared to the baseline models. In the semi-supervised approach, the proposed method achieved a minimum of 10% improvement in anomaly detection accuracy.
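To make the pipeline concrete, the sketch below builds an ensemble in the spirit described above: each base detector sees a random feature subset that is rotated with PCA, and the per-model scores are fused by simple averaging. The detector choice (IsolationForest), the single rotation per subset, and all sizes are assumptions for illustration, not the paper's exact FBNR configuration.

```python
# Minimal feature-bagging-with-rotation ensemble on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 16))   # synthetic multivariate windows
X[::50, 3] += 6.0                 # inject anomalies confined to one feature

n_models, subset_size = 5, 8
scores = np.zeros(len(X))

for _ in range(n_models):
    feats = rng.choice(X.shape[1], size=subset_size, replace=False)
    rotation = PCA(n_components=subset_size)   # rotation of the chosen subspace
    X_rot = rotation.fit_transform(X[:, feats])
    model = IsolationForest(random_state=0).fit(X_rot)
    # score_samples is higher for normal points, so negate it.
    scores += -model.score_samples(X_rot)

scores /= n_models                # unsupervised fusion: score averaging
print("top-5 most anomalous indices:", np.argsort(scores)[-5:])
```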

An intelligent native network slicing security architecture empowered by federated learning

Network Slicing (NS) has transformed the landscape of resource sharing in networks, offering flexibility to support services and applications with highly variable requirements in areas such as next-generation 5G/6G mobile networks (NGMN), vehicular networks, the industrial Internet of Things (IoT), and verticals. Although significant research and experimentation have driven the development of network slicing, existing architectures often lack intrinsic, intelligent security capabilities at the architectural level. This paper proposes an intelligent architectural security mechanism to improve NS solutions. We conceived a security-native architecture that deploys intelligent microservices as federated agents based on machine learning, providing intra-slice and architectural operation security for the Slicing Future Internet Infrastructures (SFI2) reference architecture. Notably, federated-learning approaches match highly distributed, modern microservice-based architectures, thus providing a unifying and scalable design choice for NS platforms that addresses both service and security. Using ML-Agents and Security Agents, our approach identified Distributed Denial-of-Service (DDoS) and intrusion attacks within the slice using generic and non-intrusive telemetry records, achieving an average accuracy of approximately 95.60% in the network slicing architecture and 99.99% for the deployed slice (intra-slice). This result demonstrates the potential for leveraging architectural operational security and introduces a promising new research direction for network slicing architectures.
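The key design point above is that slice-local agents train on their own telemetry and only model parameters are aggregated centrally. The sketch below shows a single FedAvg-style round with linear detectors as stand-ins; the agent count, features, and model are assumptions, not the SFI2 agents themselves.

```python
# One federated-averaging round over hypothetical slice-local security agents.
import numpy as np
from sklearn.linear_model import SGDClassifier

def local_update(seed):
    # Each agent fits a linear DDoS/intrusion detector on its local telemetry.
    local_rng = np.random.default_rng(seed)
    X = local_rng.normal(size=(500, 10))
    y = (X[:, 0] + 0.2 * local_rng.normal(size=500) > 0).astype(int)
    clf = SGDClassifier(loss="log_loss", max_iter=20).fit(X, y)
    return clf.coef_.ravel(), clf.intercept_

# Aggregate weights from three agents; raw telemetry never leaves the slice.
updates = [local_update(seed) for seed in (11, 22, 33)]
global_coef = np.mean([c for c, _ in updates], axis=0)
global_intercept = np.mean([b for _, b in updates], axis=0)
print("aggregated model:", global_coef.round(2), global_intercept.round(2))
```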

QM-ARC: QoS-aware Multi-tier Adaptive Cache Replacement Strategy

Distributed data-centric systems, such as Named Data Networking, utilize in-network caching to reduce application latency by buffering relevant data in high-speed memory. However, the significant increase in data traffic makes expanding memory capacity prohibitively expensive. To address this challenge, integrating technologies like non-volatile memory and high-speed solid-state drives with dynamic random-access memory can form a cost-effective multi-tier cache system. Additionally, most existing caching policies focus on categorizing data based on recency and frequency, overlooking the varying Quality-of-Service (QoS) requirements of applications and customers, a concept supported by Service Level Agreements in various service delivery models, particularly in Cloud computing. One of the most prominent algorithms in the caching policy literature is the Adaptive Replacement Cache (ARC), which uses recency and frequency lists but does not account for QoS. In this paper, we propose a QoS-aware Multi-tier Adaptive Replacement Cache (QM-ARC) policy. QM-ARC extends ARC by incorporating QoS-based priorities for the data of different applications and customers, using a penalty concept borrowed from service-level management practices. QM-ARC is generic, applicable to any number of cache tiers, and can accommodate various penalty functions. Furthermore, we introduce a complementary feature for QM-ARC that employs Q-learning to dynamically adjust the sizes of the two ARC lists. Our solution, evaluated using both synthetic and real-world traces, demonstrates significant improvements in QoS compared to state-of-the-art methods by better considering priority levels. Results show that QM-ARC reduces penalties by up to 45% and increases the hit rate for high-priority data by up to 84%, without negatively impacting the overall hit rate, which also increases by up to 61%.
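The central idea above is that eviction should weigh a per-item miss penalty alongside recency. The sketch below shows that weighting on a single-tier LRU-style cache; it deliberately omits the ARC ghost lists, multi-tier handling, and Q-learning, and the class names and penalty values are assumptions, not the paper's QM-ARC.

```python
# Simplified penalty-aware eviction: score = recency rank x QoS penalty,
# so stale low-priority items go first. Not the full ARC/QM-ARC machinery.
from collections import OrderedDict

class PenaltyAwareCache:
    def __init__(self, capacity, penalty_of):
        self.capacity = capacity
        self.penalty_of = penalty_of    # maps key -> miss penalty (QoS class)
        self.items = OrderedDict()      # iteration order = oldest to newest

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key) # refresh recency on a hit
            return self.items[key]
        return None

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            # Evict the item with the lowest (recency rank * penalty) score.
            victim = min(
                enumerate(self.items),
                key=lambda iv: (iv[0] + 1) * self.penalty_of(iv[1]),
            )[1]
            del self.items[victim]
        self.items[key] = value

# Example: keys prefixed "gold" belong to a hypothetical high-penalty QoS class.
cache = PenaltyAwareCache(2, lambda k: 10 if k.startswith("gold") else 1)
cache.put("gold:a", 1); cache.put("std:b", 2); cache.put("std:c", 3)
print(list(cache.items))    # the high-penalty item survives eviction
```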

A priority-aware dynamic scheduling algorithm for ensuring data freshness in 5G networks

To ensure the freshness of information in wireless communication systems, a new performance metric named the age of information (AoI) is being adopted in the design of transmission schedulers. However, most AoI schedulers rely on iterative optimization methods, which struggle to adapt to real-time changes, particularly in real-world 5G deployment scenarios where network conditions are highly dynamic. In addition, they neglect the impact of consecutive AoI deadline violations, which result in prolonged information deficits. To address these limitations, we present a 5G scheduler that can cope with dynamic network conditions, with the aim of minimizing the long-term average AoI under deadline constraints. Specifically, we consider a dense urban massive machine-type communication (mMTC) scenario in which numerous Internet of Things (IoT) devices frequently join or leave the network under time-varying channel conditions. To facilitate real-time adaptation, we develop a per-slot scheduling method that makes locally optimal decisions for each slot without requiring extensive iterations. In addition, we combine the per-slot scheduling method with a priority-rule scheduling algorithm to satisfy the stringent timing requirements of 5G. The simulation results show that the proposed scheduler reduces the average AoI by approximately 10%, the deadline violation rate by approximately 40%, and the consecutive violation rate by approximately 20% compared with other AoI schedulers.
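To illustrate the per-slot idea described above, the toy simulation below serves, in each slot, the device whose current AoI is closest to its deadline, without any iterative optimization. The urgency index, success probability, and deadlines are illustrative assumptions, not the paper's scheduling rule.

```python
# Toy per-slot AoI scheduler: greedy, deadline-aware device selection.
import random

random.seed(0)
N_DEVICES, N_SLOTS = 8, 50
deadline = [10] * N_DEVICES      # assumed AoI deadline per device (slots)
aoi = [1] * N_DEVICES            # current age of information
violations = 0

for slot in range(N_SLOTS):
    success = [random.random() < 0.8 for _ in range(N_DEVICES)]  # channel state
    # Greedy per-slot decision: prioritize devices nearest to a deadline violation.
    urgency = [aoi[i] / max(deadline[i] - aoi[i], 1) for i in range(N_DEVICES)]
    chosen = max(range(N_DEVICES), key=lambda i: urgency[i])
    for i in range(N_DEVICES):
        if i == chosen and success[i]:
            aoi[i] = 1           # fresh update delivered
        else:
            aoi[i] += 1          # information keeps aging
        if aoi[i] > deadline[i]:
            violations += 1

print("deadline violations over the horizon:", violations)
```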
