Abstract

With the rapid development of machine learning and, subsequently, deep learning, deep neural networks have achieved remarkable results on a wide range of tasks. However, as the accuracy of trained models increases, new neural network architectures present new challenges, since they require a significant amount of computing power for training and inference. This paper reviews existing approaches to reducing the computational cost and training time of neural networks, and evaluates and improves one of the existing pruning methods on a face detection model. The results show that the presented method can eliminate 69% of the parameters while accuracy declines by only 1.4%, a drop that can be further reduced to 0.7% by excluding the context network modules from pruning.
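The abstract does not describe the authors' pruning procedure itself; purely as a rough illustration of what removing a given fraction of parameters can look like in practice, the sketch below applies PyTorch's built-in magnitude pruning to a single convolutional layer. The layer shape and the 69% ratio are illustrative assumptions, not the authors' actual method or configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy convolutional layer standing in for one block of a face detector
# (shape chosen arbitrarily for illustration).
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)

# Zero out the 69% of weights with the smallest magnitude (unstructured L1 pruning).
prune.l1_unstructured(conv, name="weight", amount=0.69)

# Fraction of weights that are now zero.
sparsity = (conv.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")

# Fold the pruning mask into the weight tensor, making the removal permanent.
prune.remove(conv, "weight")
```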
