Abstract

Network intrusion detection systems (NIDS) play a crucial role in protecting networks from cyberattacks. Many conventional techniques rely on signature-based approaches, which struggle to distinguish between different types of attacks. This research proposes a new approach to network intrusion detection based on a stacked FT-Transformer architecture. FT-Transformers, a variant of the Transformer model, have shown outstanding performance on complex tabular data; because network traffic data is inherently tabular, they are an attractive option for intrusion detection tasks. Our study examines whether FT-Transformers outperform conventional machine learning (ML) methods in this setting. We hypothesize that FT-Transformers will achieve higher detection accuracy than single-layered ML models because of their capacity to capture long-range dependencies in network traffic data. We also evaluate the FT-Transformer model on several network traffic datasets covering various protocols and attack types in order to assess its performance and generalizability. By comparing FT-Transformers against classic ML models and testing them across datasets, we aim to show that they can help secure networks against ever-evolving cyber threats.
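To make the architecture concrete, the following is a minimal sketch of an FT-Transformer-style classifier for tabular traffic features. It is not the paper's implementation; all class and parameter names are hypothetical, and it assumes PyTorch. The core idea it illustrates is the feature tokenizer: each numeric column gets its own learned embedding, a [CLS] token is prepended, and a standard Transformer encoder attends across the feature tokens.

```python
# Sketch only: a simplified FT-Transformer-style model, not the authors' code.
import torch
import torch.nn as nn

class FeatureTokenizer(nn.Module):
    """Map each numeric feature x_j to a d-dim token: x_j * W_j + b_j."""
    def __init__(self, n_features: int, d_token: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_features, d_token) * 0.02)
        self.bias = nn.Parameter(torch.zeros(n_features, d_token))

    def forward(self, x):  # x: (batch, n_features)
        # Broadcast (batch, n_features, 1) * (n_features, d_token)
        return x.unsqueeze(-1) * self.weight + self.bias

class FTTransformerSketch(nn.Module):
    def __init__(self, n_features: int, n_classes: int,
                 d_token: int = 32, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.tokenizer = FeatureTokenizer(n_features, d_token)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_token))  # learned [CLS] token
        layer = nn.TransformerEncoderLayer(d_model=d_token, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_token, n_classes)  # e.g. benign vs. attack classes

    def forward(self, x):
        tokens = self.tokenizer(x)                         # (B, F, d)
        cls = self.cls.expand(x.shape[0], -1, -1)          # (B, 1, d)
        h = self.encoder(torch.cat([cls, tokens], dim=1))  # (B, F+1, d)
        return self.head(h[:, 0])                          # classify from [CLS]
```

A "stacked" variant, as suggested in the abstract, could be built by increasing `n_layers` or by feeding one model's [CLS] representation into another; the exact stacking scheme used in the paper is not specified here.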
