Abstract

Machine Learning (ML) algorithms have become prevalent in today's digital world. However, training, testing, and deploying ML models consume a great deal of energy, particularly when datasets are large. This has a direct and adverse impact on the environment through Scope 2 emissions. It is therefore beneficial to explore the environmental impact of ICT usage within an organisation, and it is vital to adopt energy consumption as a metric for evaluating existing and future ML models. Our research goal is to evaluate the energy consumption of a set of widely used ML classifier algorithms: Logistic Regression, Gaussian Naive Bayes (GNB), Support Vector, K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest, Multi-Layer Perceptron, AdaBoost, Gradient Boosting, LightGBM, and CatBoost classifiers. The findings provide evidence-based recommendations for sustainable and energy-efficient ML algorithms. The experimental results show that the GNB classifier consumes only 63 J/s, the lowest among all models, whereas the widely used KNN and DT classifiers consume 3 to 10 times more than the rest.
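The abstract does not describe the measurement setup, so the following is only a minimal sketch of how such a classifier comparison could be framed, using wall-clock training time as a crude proxy for energy (the study measures energy directly); the dataset size, model selection, and repeat count here are illustrative assumptions, not the paper's protocol:

```python
import time
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def time_training(models, X, y, repeats=3):
    """Return the mean wall-clock training time in seconds for each model."""
    results = {}
    for name, model in models.items():
        elapsed = []
        for _ in range(repeats):
            start = time.perf_counter()
            model.fit(X, y)  # training cost dominates; a proxy for energy use
            elapsed.append(time.perf_counter() - start)
        results[name] = sum(elapsed) / len(elapsed)
    return results

# Illustrative synthetic dataset and a subset of the classifiers compared.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
models = {
    "GNB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
}
timings = time_training(models, X, y)
for name, t in sorted(timings.items(), key=lambda kv: kv[1]):
    print(f"{name}: {t:.4f} s")
```

A real energy evaluation would replace the timer with hardware- or software-based power readings sampled during training and inference, since runtime alone does not capture differences in power draw.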
