Abstract

The rapid growth in the size of data, driven by advances in technology and the use of the Internet of Things to collect and transmit data, has produced large volumes of data and an increasing variety of data types that must be processed at very high speeds in order to extract meaningful information from massive volumes of unstructured data. Mining this data is very challenging, since much of it suffers from the problem of high dimensionality. High dimensionality represents a great challenge that can be controlled through the process of feature selection. Feature selection is a complex task with multiple layers of difficulty, and a deeper understanding of it is required to grasp the impediments associated with high-dimensional data. In this study, we examine the effect of appropriate feature selection during the classification process of anomaly-based network intrusion detection systems. We test its effect on the performance of Restricted Boltzmann Machines and compare their performance to conventional machine learning algorithms. We establish that when features representative of the model are selected, the change in accuracy was always less than 3% across all algorithms. This verifies that accurate selection of the important features when building a model can have a significant impact on the accuracy level of the classifiers. We also confirm that the performance of Restricted Boltzmann Machines can outperform, or is at least comparable to, other well-known machine learning algorithms. Extracting these important features can be very useful when building a model from datasets with many features.

Highlights

  • The progress in streaming analytics, internet of things, artificial intelligence, signal processing, cloud and cognitive computing, and other technological fields facilitated our access to large magnitudes of data that were not available in the past

  • Measuring the impact of the Restricted Boltzmann Machines (RBMs) and the other machine learning algorithms requires us to address two important issues

  • The type of the data that is fed to the algorithms, which we discussed earlier in Section 5, and the tuning of the different hyperparameter settings of the RBM to find the optimal or semi-optimal settings that would lead to the best results when testing the algorithm while at the same time avoiding problems such as overfitting
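The hyperparameter tuning described in the highlights above can be sketched as a cross-validated grid search over an RBM's settings. This is a minimal illustration, not the study's actual setup: the data is synthetic, the grid values are arbitrary, and scikit-learn's `BernoulliRBM` (used here as a feature learner feeding a logistic-regression classifier) stands in for whatever RBM implementation the authors used.

```python
# Hedged sketch: tuning RBM hyperparameters with a cross-validated grid
# search. The dataset and grid values are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(300, 20)                       # stand-in for features scaled to [0, 1]
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic binary label

# RBM learns a latent representation; logistic regression classifies it.
pipe = Pipeline([
    ("rbm", BernoulliRBM(random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# 3-fold cross-validation picks the setting with the best held-out score,
# which helps guard against overfitting to any single split.
grid = GridSearchCV(
    pipe,
    param_grid={
        "rbm__learning_rate": [0.01, 0.05],
        "rbm__n_components": [16, 32],
    },
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)
```

In practice the grid would cover the hyperparameters the study varies (learning rate, number of hidden units, training epochs), with cross-validation serving as the overfitting check mentioned in the highlight.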


Introduction

The progress in streaming analytics, the Internet of Things, artificial intelligence, signal processing, cloud and cognitive computing, and other technological fields has given us access to magnitudes of data that were not available in the past. This surge, coupled with the rapid growth in the dimensionality and volume of the data, has created new challenges. Different preprocessing approaches were developed to deal with the problem, involving various methods. One of the most instrumental is feature selection, which is considered a vital step in analyzing data and building predictive models that explore the relations between many features.
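The preprocessing role of feature selection described above can be sketched with a small experiment: train a classifier on all features of a high-dimensional dataset, then again on only the top-ranked features, and compare accuracies. This is a hedged illustration on synthetic data, assuming scikit-learn's `SelectKBest` with an ANOVA F-test as the selection method; it is not the study's dataset or procedure.

```python
# Minimal sketch of feature selection as a preprocessing step:
# compare a classifier trained on all features vs. the top-k features.
# Dataset and selector choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic high-dimensional data: 100 features, only 10 informative.
X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on all 100 features.
full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Reduced: keep the 10 features ranked highest by the ANOVA F-test,
# then train the same classifier on that subset.
reduced = make_pipeline(SelectKBest(f_classif, k=10),
                        LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

acc_full = full.score(X_te, y_te)
acc_reduced = reduced.score(X_te, y_te)
print(f"all features: {acc_full:.3f}, top-10 features: {acc_reduced:.3f}")
```

When the selected features are truly representative, the reduced model's accuracy stays close to the full model's, which mirrors the small accuracy changes the study reports after selection.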

