Abstract

Pruning connections in a fully connected neural network makes it possible to remove redundancy from the network's structure and thus reduce the computational cost of its implementation while preserving the classification accuracy for images presented at its input. However, the choice of parameters for the pruning procedure has not yet been studied sufficiently, and this choice depends essentially on the configuration of the neural network. At the same time, any neural network configuration contains one or more multilayer perceptrons, for which universal recommendations on choosing the pruning parameters can be developed. One of the most promising methods for practical implementation is considered: iterative pruning combined with preprocessing of the input signals to regularize the training of the neural network. For a specific multilayer perceptron configuration and the MNIST (Modified National Institute of Standards and Technology) dataset, a collection of handwritten digit samples proposed by the US National Institute of Standards and Technology as a standard for comparing image recognition methods, the dependences of handwritten-digit classification accuracy and training speed on the learning step, the pruning interval, and the number of connections removed at each pruning iteration were obtained. It is shown that the best set of parameters of the training-with-pruning procedure improves classification quality by about 1 % compared with the worst set in the studied range. The convex character of these dependences permits a constructive search for a neural network configuration that provides the highest classification accuracy at the minimum computational cost of implementation.
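As an illustration of the procedure described above, the following minimal sketch performs iterative magnitude-based pruning of a multilayer perceptron trained on MNIST. It assumes the PyTorch framework and torchvision's MNIST loader; the layer sizes, learning step, pruning interval, and number of connections removed per pruning iteration are illustrative placeholders rather than the values studied in the paper, and simple input normalisation stands in for the preprocessing-based regularisation mentioned in the abstract.

# Minimal sketch (not the paper's implementation): iterative magnitude pruning of a
# multilayer perceptron on MNIST, assuming PyTorch. All hyperparameter values below
# are illustrative placeholders, not the values studied in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

torch.manual_seed(0)

LEARNING_RATE    = 0.05   # "learning step"
PRUNING_INTERVAL = 200    # training batches between pruning iterations
REMOVED_PER_STEP = 500    # connections removed at each pruning iteration
EPOCHS           = 1

# Input normalisation stands in for the preprocessing-based regularisation
# of the training process mentioned in the abstract.
tfm = transforms.Compose([transforms.ToTensor(),
                          transforms.Normalize((0.1307,), (0.3081,))])
train_set = datasets.MNIST("data", train=True, download=True, transform=tfm)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Fully connected network (multilayer perceptron); layer sizes are assumptions.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(784, 300), nn.ReLU(),
                      nn.Linear(300, 10))
opt = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE)

# Binary masks: 1 marks a connection that is still present, 0 a pruned one.
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() == 2}

def prune_smallest(k):
    # Remove the k smallest-magnitude connections that are still present.
    remaining = torch.cat([p.detach().abs()[masks[n] > 0]
                           for n, p in model.named_parameters() if n in masks])
    if remaining.numel() <= k:
        return
    threshold = remaining.kthvalue(k).values
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                masks[n][(p.abs() <= threshold) & (masks[n] > 0)] = 0.0

step = 0
for epoch in range(EPOCHS):
    for x, y in loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
        with torch.no_grad():                      # keep pruned connections at zero
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
        step += 1
        if step % PRUNING_INTERVAL == 0:
            prune_smallest(REMOVED_PER_STEP)

Sweeping LEARNING_RATE, PRUNING_INTERVAL, and REMOVED_PER_STEP over a grid and recording test accuracy for each run would reproduce the kind of parameter dependences studied in the paper.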

Highlights

  • The use of deep neural networks is becoming increasingly widespread in practical applications, in particular in image classification problems [1]

  • It is shown that the best set of parameters of the training-with-pruning procedure improves classification quality by about 1 % compared with the worst set in the studied range

  • Training a multilayer perceptron with pruning, using image classification on the MNIST set as an example and a wide range of values of the pruning-procedure parameters, shows that classification quality can be improved by about 1 %


Introduction

The use of deep neural networks is becoming increasingly widespread in practical applications, in particular in image classification problems [1]. Alongside the development of convolutional neural networks, a large number of neural network architectures have appeared that provide the same classification quality as convolutional networks while requiring less computation. Since the first works on reducing redundancy in neural networks [13], a large number of different approaches have been developed [11], but they do not cover the whole variety of modern neural network architectures. New practical applications keep emerging that demand higher image classification quality under constraints on the available computing power.
