Abstract

Deep neural networks have shown superior performance in a variety of applications. However, they are often treated as black boxes in real-world settings, which makes their decisions difficult to explain from a human viewpoint. Understanding the behavior of deep neural networks is important both for trusting the decisions they make and for improving their classification accuracy. In this study, information-theoretic analysis is used to investigate the layer-wise behavior of neurons in deep neural networks. The activation patterns of individual neurons in fully connected layers can provide insight into the performance of a neural network model. Neuron activation behavior is investigated on state-of-the-art classification network models: we study and compare the layer-wise activation patterns of neurons in fully connected layers given the same image input. Experiments are conducted on several data sets. We find that in a well-trained classification model, the randomness of the neuron activation pattern decreases with the depth of the fully connected layers; that is, the activation patterns of deeper layers are more stable than those of shallower layers. The results of this study can also help answer the question of how many layers are needed to avoid overfitting in deep neural networks. Corresponding experiments are conducted to validate these assumptions.
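The abstract does not specify the exact randomness measure used in the paper. As a minimal sketch of one plausible information-theoretic approach, the snippet below binarizes a layer's activations into on/off patterns and computes the Shannon entropy of those patterns over a set of inputs; lower entropy indicates more stable (less random) activation patterns, as the abstract reports for deeper layers. The threshold and the entropy-of-patterns formulation here are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def activation_entropy(activations, threshold=0.0):
    """Shannon entropy (in bits) of binary activation patterns.

    activations: (n_inputs, n_neurons) array of one layer's outputs.
    Each row is binarized (neuron "active" if output > threshold),
    and each resulting binary pattern is treated as one symbol.
    Lower entropy means the layer's activation pattern is more stable.
    """
    patterns = (activations > threshold).astype(np.uint8)
    # Count how often each distinct pattern (row) occurs.
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy check: identical patterns give zero entropy,
# while varied activations give positive entropy.
rng = np.random.default_rng(0)
stable = np.ones((8, 5))             # same pattern for every input
varied = rng.normal(size=(8, 5))     # patterns are likely to differ
print(activation_entropy(stable))    # 0.0
print(activation_entropy(varied))
```

Comparing this quantity across successive fully connected layers of a trained classifier would reproduce the kind of layer-wise comparison the abstract describes.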

