Abstract

The quest to examine artificial neural network weights in order to understand the patterns underlying them has been a point of research interest since the late 1990s. Recent advances in artificial neural networks, particularly deep neural networks, provide an opportunity to examine the weights of individual layers for the discovery, extraction, and transfer of knowledge. Several experiments transferring weights from one layer to other layers are conducted to analyse how classification is affected by placing weights in a particular layer at a particular position. This paper, for the first time, investigates the importance of individual layers in a neural network through systematic experimental evaluation. Experiments are carried out using feed-forward deep neural networks with different topologies. Three data sets, namely MNIST, IRIS, and a synthetic hierarchical data set, are used for the experiments. Knowledge extraction and knowledge transfer (through weights) experiments are performed with multiple strategies. The results indicate that the middle-layer weights capture some underlying representation and have the highest impact on classification accuracy. When the middle-layer weights are transferred to an untrained deep neural network, there is a significant improvement in classification accuracy, irrespective of topology or data set (among the three data sets used).
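To make the transfer procedure concrete, the sketch below is a minimal illustration, assuming PyTorch; the toy data, layer sizes, and training loop are assumptions for demonstration and are not the paper's exact setup. It copies the middle hidden layer's weights from a trained feed-forward network into an otherwise untrained copy of the same topology and reports classification accuracy before and after the transfer.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim=4, hidden=(16, 16, 16), out_dim=3):
    """Feed-forward network: input -> three hidden layers -> output."""
    layers, prev = [], in_dim
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.ReLU()]
        prev = h
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

torch.manual_seed(0)

# Toy stand-in data (4 features, 3 classes); the paper itself uses MNIST,
# IRIS and a synthetic hierarchical data set.
X = torch.randn(256, 4)
y = (X[:, 0] > 0).long() + (X[:, 1] > 0).long()

# Train the source network on the toy data.
source = make_mlp()
opt = torch.optim.Adam(source.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(source(X), y).backward()
    opt.step()

def accuracy(model):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

# Fresh, untrained network with the same topology.
target = make_mlp()
print("untrained target accuracy:", accuracy(target))

# Transfer only the middle hidden layer's weights. With three hidden layers the
# Sequential is [Linear, ReLU, Linear, ReLU, Linear, ReLU, Linear], so the
# middle hidden Linear sits at index 2.
middle = 2
with torch.no_grad():
    target[middle].weight.copy_(source[middle].weight)
    target[middle].bias.copy_(source[middle].bias)

print("target accuracy after middle-layer transfer:", accuracy(target))
```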
