Abstract
Deep learning models have become increasingly complex, leading to high energy consumption during both training and inference. This paper investigates model pruning as a means of substantially reducing the size of deep learning models while preserving predictive accuracy. We study the trade-off between model performance and energy consumption by systematically applying three pruning strategies (magnitude-based, structured, and random pruning) to popular neural network architectures. Using the CIFAR-10 dataset, we evaluate each model's energy consumption during training and inference. The results provide insights into achieving substantial energy savings with minimal loss of accuracy, contributing to the development of more sustainable and efficient AI systems.
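The abstract names three pruning strategies; a minimal sketch of what each looks like in practice is given below, using PyTorch's torch.nn.utils.prune. The layer shape, sparsity level, and CIFAR-10-sized input are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch of the three pruning strategies compared in the paper
# (magnitude-based, structured, random), applied to copies of one layer.
# Layer size and 30% sparsity are assumptions for illustration only.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

base = nn.Conv2d(3, 16, kernel_size=3, padding=1)

# Magnitude-based (unstructured): zero the 30% of weights with smallest |w|.
magnitude = copy.deepcopy(base)
prune.l1_unstructured(magnitude, name="weight", amount=0.3)

# Structured: prune 30% of output channels, ranked by their L2 norm.
structured = copy.deepcopy(base)
prune.ln_structured(structured, name="weight", amount=0.3, n=2, dim=0)

# Random (unstructured): zero 30% of weights chosen uniformly at random.
random_pruned = copy.deepcopy(base)
prune.random_unstructured(random_pruned, name="weight", amount=0.3)

# Fold each mask into its weight tensor and report the resulting sparsity.
x = torch.randn(1, 3, 32, 32)  # CIFAR-10-sized input
for name, layer in [("magnitude", magnitude),
                    ("structured", structured),
                    ("random", random_pruned)]:
    prune.remove(layer, "weight")
    sparsity = float((layer.weight == 0).float().mean())
    print(f"{name}: output {tuple(layer(x).shape)}, sparsity {sparsity:.2f}")
```

In this sketch the pruned weights are simply masked to zero; measuring the energy impact during training and inference, as the paper does, would additionally require an energy-measurement tool and the full training pipeline.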