Abstract

Computer architecture research extensively studies the system resources consumed by algorithms and applications. Machine Learning (ML) research, on the other hand, focuses on obtaining high levels of accuracy without computational constraints. The typical approach to addressing the need for higher compute power and system resources for ML tasks is to add more hardware and to employ lighter frameworks (e.g., using TensorFlow Lite and PyTorch Mobile instead of TensorFlow and PyTorch, respectively). The extensive use of ML models in applications, especially in Internet of Things (IoT) security, requires investigation of their resource consumption. Most tasks employing Artificial Intelligence/Machine Learning (AI/ML) must choose models judiciously, considering the system resources they consume. It is therefore necessary to benchmark various ML techniques in terms of the system resources (CPU and memory) they consume, especially for IoT applications employing ML/DL methods. In this work, we benchmark network packet clustering and explore its impact on the trade-off between performance metrics (accuracy, F1 score) and the system resources (CPU and memory) consumed by commonly used ML algorithms in the context of botnet detection in IoT networks. We focus on the system resources consumed by ML algorithms, not on optimising ML algorithms for resource constraints or application workloads. We show that system resource consumption decreases as the aggregation size increases, with an improvement of at least 20% in F1 scores but a slight reduction in accuracy. Based on the application context and system constraints, appropriate packet batch sizes and ML algorithms can thus be chosen, enabling more rapid prototyping of AI/ML-based applications.
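
Below is a minimal sketch of how such a benchmark could be set up, assuming scikit-learn classifiers, mean-aggregation of consecutive packets into batches, and synthetic placeholder data. The function names, the aggregation scheme, and the use of the standard-library `time.process_time` and `tracemalloc` as resource probes are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: measure CPU time and approximate peak memory while training and
# evaluating a classifier on packet features aggregated into batches of a given size.
import time
import tracemalloc

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split


def aggregate_packets(features, labels, batch_size):
    """Group consecutive packets into batches and average their features.

    A batch is labelled malicious (1) if the majority of its packets are.
    (Illustrative aggregation scheme, not necessarily the one used in the paper.)
    """
    n_batches = len(features) // batch_size
    agg_x = np.array([features[i * batch_size:(i + 1) * batch_size].mean(axis=0)
                      for i in range(n_batches)])
    agg_y = np.array([int(labels[i * batch_size:(i + 1) * batch_size].mean() >= 0.5)
                      for i in range(n_batches)])
    return agg_x, agg_y


def benchmark(model, features, labels, batch_size):
    """Return accuracy, F1 score, CPU seconds, and peak traced memory (MiB)."""
    x, y = aggregate_packets(features, labels, batch_size)
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.3, random_state=0)
    tracemalloc.start()
    cpu_start = time.process_time()
    model.fit(x_train, y_train)
    preds = model.predict(x_test)
    cpu_seconds = time.process_time() - cpu_start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (accuracy_score(y_test, preds), f1_score(y_test, preds),
            cpu_seconds, peak_bytes / 2**20)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    packets = rng.normal(size=(20000, 10))   # placeholder packet features
    labels = rng.integers(0, 2, size=20000)  # placeholder benign/botnet labels
    for batch_size in (1, 10, 50, 100):
        acc, f1, cpu, mem = benchmark(RandomForestClassifier(n_estimators=50),
                                      packets, labels, batch_size)
        print(f"batch={batch_size:>3}  acc={acc:.3f}  f1={f1:.3f}  "
              f"cpu={cpu:.2f}s  peak_mem={mem:.1f} MiB")
```

Note that `tracemalloc` only traces Python-level allocations and `process_time` captures CPU time for the whole process, so these are coarse approximations; a production benchmark would more likely sample OS-level counters (e.g., via `psutil`) per algorithm run.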
