Abstract

Model compression is required when slow, large models are used, for example for classification, but constraints on transmission, storage, time, or computing capability must be met. Multilayer Perceptron (MLP) models have traditionally been used as classifiers. Depending on the problem, they may need a large number of parameters (neuron activation functions, weights, and biases) to reach acceptable performance. This work proposes a technique to compress an MLP model, while preserving its classification performance, through the kernels of a Volterra series model. The Volterra kernels can represent the information that a Neural Network (NN) model has learnt with almost the same accuracy, but compressed into fewer parameters. The Volterra-NN approach proposed in this work has two parts. First, it extracts the Volterra kernels from the NN parameters after training; these kernels contain the classifier's knowledge. Second, it builds Volterra series models of different orders for the original problem from those kernels, significantly reducing the many neural parameters to a few Volterra-NN parameters (the kernels). Experimental results on the standard Iris classification problem show the good compression capabilities of the Volterra-NN model.
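As a rough illustration of the compressed representation described above, the sketch below evaluates a second-order Volterra series y = h0 + Σᵢ h1[i]·x[i] + Σᵢⱼ h2[i,j]·x[i]·x[j] for a 4-feature input (as in Iris). The kernels here are random placeholders; in the paper's approach they would be derived from the trained MLP's weights. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def volterra_2nd_order(x, h0, h1, h2):
    """Evaluate a second-order Volterra series:
    y = h0 + sum_i h1[i]*x[i] + sum_{i,j} h2[i,j]*x[i]*x[j]
    h0: scalar kernel, h1: (n,) vector kernel, h2: (n, n) matrix kernel.
    """
    x = np.asarray(x, dtype=float)
    return float(h0 + h1 @ x + x @ h2 @ x)

# Placeholder kernels for a 4-feature input; the Volterra-NN method would
# instead extract these from a trained MLP's parameters.
rng = np.random.default_rng(0)
h0 = 0.5
h1 = rng.normal(size=4)
h2 = rng.normal(size=(4, 4))

x = np.array([5.1, 3.5, 1.4, 0.2])  # one Iris-like sample
y = volterra_2nd_order(x, h0, h1, h2)
```

Note the parameter count: a second-order series over n features needs only 1 + n + n² values (15 here, or fewer if h2 is kept symmetric), which is the source of the compression relative to a full MLP's weights and biases.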
