Abstract
This paper proposes a new information-theoretic method for interpreting the main characteristics of neural networks after learning. Rather than interpreting a network from a single viewpoint among many, the method considers all possible representations produced by the network. It compresses multi-layered neural networks and averages all possible weights and activations, which is close in spirit to the ensemble methods popular for improving generalization performance. As a preliminary study, the method was applied to a symmetric data set chosen so that the final results would be easy to interpret. The results show that the method could improve generalization performance through information augmentation. In addition, weights compressed from multi-layered networks contributed to improved generalization performance for two-layered networks without hidden layers. The final collective weights perfectly exhibited the symmetric property of the data set, and the connection weights of the two-layered networks also exhibited this symmetry when the collective (compressed) weights were used as initial weights.
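The abstract does not specify the exact compression procedure. One plausible reading, sketched below purely as an assumption, is to collapse a multi-layered network into a single input-to-output ("collective") weight matrix by chaining the layer weight matrices, which can then initialize a two-layered network without hidden layers. The function name `compress_weights` and the layer sizes are hypothetical, and nonlinear activations are ignored in this linearized view.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-layered network: one weight matrix per layer,
# each of shape (n_out, n_in). Sizes are illustrative only.
layer_sizes = [6, 10, 8, 2]  # input -> hidden -> hidden -> output
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def compress_weights(weights):
    """Collapse the chain of layer matrices into one 'collective' matrix
    mapping inputs directly to outputs (a linearized view of the network)."""
    collective = weights[0]
    for W in weights[1:]:
        collective = W @ collective
    return collective

collective = compress_weights(weights)
print(collective.shape)  # (2, 6): output units x input units
```

Under this reading, `collective` would serve as the initial weight matrix of the two-layered network mentioned in the abstract, so that the compressed representation seeds further training.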