Abstract

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between pairs of neuron outputs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures that integrates TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed; on the other hand, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, to achieve a reasonable computational overhead–accuracy trade-off, it is efficient to consider only the inter-neural information transfer of the neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability, and becomes active only periodically, not after processing each input sample. Therefore, we can consider the TE in our model a slowly changing meta-parameter.
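For a pair of neuron output sequences, the TE from a source X to a target Y with history length one is TE_{X→Y} = Σ p(y_{t+1}, y_t, x_t) · log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The following is a minimal illustrative sketch of such a pairwise estimator for binarized neuron outputs; it is not the paper's exact implementation, and the history length of one and the plug-in frequency estimates are our simplifying assumptions:

    import numpy as np

    def transfer_entropy(x, y, eps=1e-12):
        # TE_{X -> Y} in bits for two binary sequences, history length 1.
        x = np.asarray(x, dtype=int)
        y = np.asarray(y, dtype=int)
        # Empirical joint counts over the triples (y_{t+1}, y_t, x_t).
        counts = np.zeros((2, 2, 2))
        np.add.at(counts, (y[1:], y[:-1], x[:-1]), 1.0)
        p_joint = counts / counts.sum()                    # p(y', y, x)
        p_yx = p_joint.sum(axis=0, keepdims=True)          # p(y, x)
        p_yny = p_joint.sum(axis=2, keepdims=True)         # p(y', y)
        p_y = p_joint.sum(axis=(0, 2), keepdims=True)      # p(y)
        cond_full = p_joint / (p_yx + eps)                 # p(y' | y, x)
        cond_self = p_yny / (p_y + eps)                    # p(y' | y)
        ratio = (cond_full + eps) / (cond_self + eps)
        return float(np.sum(p_joint * np.log2(ratio)))

    # Example: y copies x with a one-step lag, so TE_{X->Y} should be ~1 bit.
    x = np.random.default_rng(0).integers(0, 2, 1000)
    print(transfer_entropy(x, np.roll(x, 1)))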

Highlights

  • Sometimes, it is difficult to distinguish causality from statistical correlation

  • According to the authors of [4], causal information flow describes the causal structure of a system, whereas information transfer can be used to describe the emergent computation on that causal structure

  • Herzog et al. [17] computed the feedforward Transfer Entropy (TE) between neurons to structure neural feedback connectivity. These feedback connections were used in the training algorithm of a Convolutional Neural Network (CNN) classifier


Summary

Introduction

It is difficult to distinguish causality from statistical correlation. A prerequisite of causality is the time lag between cause and effect: the cause precedes the effect [1,2]. Herzog et al. [17] computed the feedforward TE between neurons to structure neural feedback connectivity. These feedback connections were used in the training algorithm of a Convolutional Neural Network (CNN) classifier. Herzog et al. continued their research in [18]. Their goal was to define clear guidelines on how to compute the TE-based neural feedback connectivity to improve the overall classification performance of feedforward neural network classifiers. Inspired by Herzog et al.'s paper [17], we defined in [20] a novel information-theoretical approach for analyzing the information transfer (measured by TE) between the nodes of feedforward neural networks.
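The paper's exact feedback rule is not reproduced in this summary. As one hypothetical illustration of periodic TE feedback between the last two fully connected layers, a per-connection TE matrix could be recomputed every few epochs (the "slowly changing meta-parameter" of the abstract) and used to modulate the corresponding weight updates. The sketch below reuses the transfer_entropy() function above; the mean-based binarization, the synthetic data, and the (1 + TE) scaling rule are all our assumptions, not the authors' method:

    rng = np.random.default_rng(0)

    def te_feedback_matrix(src_act, dst_act):
        # Pairwise TE from each source neuron to each destination neuron.
        # Activations are binarized at their per-neuron mean (an assumption).
        src_bin = (src_act > src_act.mean(axis=0)).astype(int)  # (T, m)
        dst_bin = (dst_act > dst_act.mean(axis=0)).astype(int)  # (T, n)
        m, n = src_bin.shape[1], dst_bin.shape[1]
        te = np.zeros((m, n))
        for i in range(m):
            for j in range(n):
                te[i, j] = transfer_entropy(src_bin[:, i], dst_bin[:, j])
        return te

    # Synthetic stand-ins for recorded outputs of the last two fully
    # connected layers over T training samples (m and n neurons).
    T, m, n = 500, 8, 4
    fc1_out = rng.random((T, m))
    fc2_out = rng.random((T, n))

    te = te_feedback_matrix(fc1_out, fc2_out)

    # Hypothetical periodic use: every K epochs, refresh the TE matrix and
    # scale the fc1 -> fc2 weight gradients by a (1 + TE) feedback factor,
    # so the gradients are left untouched between refreshes.
    grad_fc = rng.standard_normal((m, n))   # placeholder gradient
    grad_fc *= (1.0 + te)

Recomputing the matrix only periodically keeps the per-epoch overhead bounded, which is consistent with the abstract's trade-off of restricting TE to the last two fully connected layers.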

Transfer Entropy Notations
Computing the TE Feedback in a CNN
TE Feedback Integration in CNN Training
Experimental Results
Conclusions and Open Problems
