Abstract

Deep learning machines are computational models composed of multiple processing layers with adaptive weights that learn representations of data at multiple levels of abstraction. Their structures mainly reflect the intuitive plausibility of decomposing a problem into multiple levels of computation and representation, since higher layers of representation are believed to allow a system to learn complex functions. Surprisingly, after decades of research, these models are still designed and trained in a largely heuristic manner. In this paper, deep learning machines are modeled as disordered physical systems whose macroscopic behavior is determined by the interactions defined between their basic information-processing constituents, the artificial neurons. These models are viewed as the equilibrium states of a theoretical body subject to the law of entropy increase. The study of the changes in the energy of the body when it passes from one equilibrium state to another is used to understand the structure and role of the system's phase space and the resulting degree of disorder. It is shown that the topology of these models is strongly linked to their resulting level of disorder. Furthermore, the proposed theoretical characterization makes it possible to assess the thermodynamic efficiency with which these models can process information, and it provides a practical methodology for quantitatively estimating and comparing their expected learning and generalization capabilities. These theoretical results provide new insights into the theory of deep learning, and their implications are shown to be consistent through a set of benchmarks designed to experimentally assess their validity.
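As a point of reference for the thermodynamic framing above, the standard statistical-mechanics relations that an equilibrium description of this kind typically rests on are sketched below. This is only an illustrative assumption, since the abstract does not give the paper's exact definitions; $E_i$ denotes the energy of configuration $i$, $T$ the temperature, $k_B$ Boltzmann's constant, $Z$ the partition function, $S$ the Gibbs entropy, and $F$ the free energy.

$$ p_i = \frac{e^{-E_i/(k_B T)}}{Z}, \qquad Z = \sum_j e^{-E_j/(k_B T)} $$

$$ S = -k_B \sum_i p_i \ln p_i, \qquad F = \langle E \rangle - T S $$

In this standard picture, the equilibrium states referred to above are those that minimize the free energy $F$, which is the sense in which a system subject to the law of entropy increase settles into equilibrium, and $S$ provides the quantitative measure of the degree of disorder.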
