Abstract

The mutual information between the state of a neural network and the state of the external world represents the amount of information stored in the neural network that is associated with the external world. In contrast, the surprise of the sensory input indicates the unpredictability of the current input. In other words, the surprise is a measure of the network's inference ability, and an upper bound of the surprise is known as the variational free energy. According to the free-energy principle (FEP), a neural network continuously minimizes the free energy to perceive the external world. For the survival of animals, inference ability is considered to be more important than simply memorized information. In this study, the free energy is shown to represent the gap between the amount of information stored in the neural network and the amount available for inference. This concept connects the FEP with the infomax principle and provides a useful measure for quantifying the amount of information available for inference.
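As context for the statement that the variational free energy is an upper bound on the surprise, the standard decomposition can be written as follows. This is the generic FEP identity with a recognition density q(ϑ) over hidden causes and parameters ϑ and model structure m, using common notation rather than symbols taken from this paper:

% Variational free energy F as an upper bound on the surprise -log p(x|m).
% q(\vartheta) is the recognition (approximate posterior) density over hidden causes.
\begin{align}
F(x,q) &= \mathbb{E}_{q(\vartheta)}\bigl[\log q(\vartheta) - \log p(x,\vartheta \mid m)\bigr] \\
       &= -\log p(x \mid m) + D_{\mathrm{KL}}\bigl[q(\vartheta)\,\big\|\,p(\vartheta \mid x,m)\bigr]
       \;\ge\; -\log p(x \mid m).
\end{align}

Equality holds when q(ϑ) equals the true posterior p(ϑ | x, m), so minimizing F tightens the bound on the surprise.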

Highlights

  • Sensory perception comprises complex responses of the brain to sensory inputs

  • Blind source separation (BSS) is shown to be a subset of the inference problem considered in the free-energy principle (FEP), and the variational free energy is demonstrated to represent the difference between the information stored in the neural network and the information available for inferring current sensory inputs

  • If one has a statistical model determined by model structure m, the information calculated on the basis of m is given by the negative log likelihood −log p(x|m), which is termed the surprise of the sensory input and expresses the unpredictability of the sensory input for the individual (see the numerical sketch after this list)
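To make the surprise and its free-energy bound concrete, the following is a minimal numerical sketch for a toy linear-Gaussian model; the prior, noise variance, input value, and Gaussian recognition density are illustrative assumptions, not the model simulated in the paper.

import numpy as np

# Toy linear-Gaussian model (illustrative assumption, not the paper's simulation):
#   hidden cause:   theta ~ N(0, 1)
#   sensory input:  x | theta ~ N(theta, sigma2)
# The marginal likelihood is p(x|m) = N(0, 1 + sigma2), so the surprise
# -log p(x|m) has a closed form and the free-energy bound can be checked.

sigma2 = 0.5        # observation-noise variance (assumed value)
x = 1.3             # a single sensory input (assumed value)

# Surprise of the input under the generative model m
marg_var = 1.0 + sigma2
surprise = 0.5 * (np.log(2.0 * np.pi * marg_var) + x ** 2 / marg_var)

# Exact posterior p(theta | x, m) for this conjugate model
post_mean = x / (1.0 + sigma2)
post_var = sigma2 / (1.0 + sigma2)

def free_energy(mu_q, var_q):
    # F = -log p(x|m) + KL[ q(theta) || p(theta|x,m) ] for a Gaussian q
    kl = 0.5 * (np.log(post_var / var_q)
                + (var_q + (mu_q - post_mean) ** 2) / post_var
                - 1.0)
    return surprise + kl

print("surprise            :", surprise)
print("F, q far from post. :", free_energy(mu_q=0.0, var_q=1.0))   # strictly larger
print("F, q = posterior    :", free_energy(post_mean, post_var))   # equals the surprise

Minimizing F over q drives the free energy down toward the surprise, which is the sense in which the free energy is an upper bound.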


Summary

Introduction

Sensory perception comprises complex responses of the brain to sensory inputs. For example, the visual cortex can distinguish objects from their background [1], while the auditory cortex can recognize a particular sound in a noisy environment with high sensitivity, a phenomenon known as the cocktail party effect [2,3,4,5,6,7]. The so-called internal model hypothesis [12,13,14,15,16,17,18,19] states that animals reconstruct a model of the external world in their brain through past experiences. This internal model helps animals infer hidden causes and predict future inputs automatically; in other words, this inference process happens unconsciously. A mathematical foundation for unconscious inference, called the free-energy principle (FEP), has been proposed [13,14,15,16,17] and is a candidate unified theory of higher brain functions. This principle hypothesizes that the parameters of the generative model are learned through unsupervised learning, while hidden variables are inferred in a subsequent inference step. In this study, blind source separation (BSS) is shown to be a subset of the inference problem considered in the FEP, and the variational free energy is demonstrated to represent the difference between the information stored in the neural network (the quantity maximized under the infomax principle [29]) and the information available for inferring current sensory inputs.
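Because the introduction frames BSS as a special case of the inference problem addressed by the FEP, a small BSS sketch may help fix ideas: hidden sources play the role of hidden causes, the mixing matrix plays the role of generative-model parameters, and unmixing corresponds to unsupervised inference. The sources, mixing matrix, and use of scikit-learn's FastICA below are illustrative choices, not the network or simulation used in the paper.

import numpy as np
from sklearn.decomposition import FastICA

# Toy blind source separation (illustrative assumptions only).
rng = np.random.default_rng(0)

# Hidden sources (causes in the external world): two independent signals.
t = np.linspace(0.0, 8.0, 2000)
s = np.c_[np.sin(2.0 * t), np.sign(np.cos(3.0 * t))]   # shape (2000, 2)
s += 0.05 * rng.standard_normal(s.shape)               # background noise

# Sensory inputs: an unknown linear mixture of the hidden sources.
A = np.array([[1.0, 0.5],
              [0.4, 1.2]])                              # generative "parameters"
x = s @ A.T

# BSS as inference: learn an unmixing transform without supervision and
# recover estimates of the hidden sources from the sensory inputs alone.
ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x)

# The recovered sources match the true ones up to permutation and scaling,
# the usual ambiguity of BSS; the cross-correlation matrix makes this visible.
corr = np.corrcoef(s.T, s_hat.T)[:2, 2:]
print(np.round(np.abs(corr), 2))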

Definition of a System
Background Noise
Information Stored in the Neural Network
Free-Energy Principle
Information Available for Inference
Comparison between the Free-Energy Principle and Related Theories
Infomax Principle
Principal Component Analysis
Independent Component Analysis
Simulation and Results
Findings
Discussion