Abstract

The geometric structure of an optimization landscape is argued to be fundamental to the success of deep neural network learning. A direct computation of the landscape beyond two layers is hard; therefore, to capture a global view of the landscape, an interpretable model of the network-parameter (or weight) space must be established. However, such a model has been lacking so far. Furthermore, it remains unknown what the landscape looks like for deep networks of binary synapses, which play a key role in robust and energy-efficient neuromorphic computation. Here, we propose a statistical mechanics framework by directly building a least structured model of the high-dimensional weight space, taking into account realistic structured data, stochastic gradient descent training, and the computational depth of neural networks. We also consider whether the number of network parameters outnumbers the number of supplied training data, namely, over- or under-parametrization. Our least structured model reveals that the weight spaces of the under-parametrized and over-parametrized cases belong to the same class, in the sense that these weight spaces are well connected without any hierarchical clustering structure. In contrast, the shallow network has a broken weight space, characterized by a discontinuous phase transition, thereby clarifying the benefit of depth in deep learning from the angle of high-dimensional geometry. Our effective model also reveals that inside a deep network there exists a liquid-like central part of the architecture, in the sense that the weights in this part behave as randomly as possible, providing algorithmic implications. Our data-driven model thus provides a statistical mechanics insight into why deep learning is unreasonably effective in terms of the high-dimensional weight space, and how deep networks differ from shallow ones.
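To make the construction concrete, here is a minimal sketch (not the authors' code) of what a least structured, i.e. maximum entropy, model of a binary weight space can look like in practice: a pairwise Ising model whose fields and couplings are fitted by pseudo-likelihood ascent to weight configurations collected from independently trained networks. The sample sizes, learning rate, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: each row is one binarized weight configuration
# (sigma_i = +/-1), e.g. collected from an independently trained network.
n_samples, n_weights = 2000, 50
samples = rng.choice([-1.0, 1.0], size=(n_samples, n_weights))

# Pairwise maximum-entropy (Ising) model:
#   P(sigma) ~ exp( sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j )
# Fit the fields h and symmetric couplings J by pseudo-likelihood ascent:
# each spin's conditional law given the others is logistic in its local field.
h = np.zeros(n_weights)
J = np.zeros((n_weights, n_weights))
lr, n_epochs = 0.05, 200

for _ in range(n_epochs):
    fields = h + samples @ J                   # local field per spin, per sample
    resid = samples - np.tanh(fields)          # sigma_i minus its conditional mean
    grad_h = resid.mean(axis=0)
    grad_J = (resid.T @ samples + samples.T @ resid) / n_samples
    np.fill_diagonal(grad_J, 0.0)              # no self-couplings
    h += lr * grad_h
    J += lr * grad_J

print("fitted fields h[:5]:", h[:5])
print("largest coupling magnitude:", np.abs(J).max())
```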

Highlights

  • Artificial deep neural networks have achieved state-of-the-art performance in many industrial and academic domains, ranging from pattern recognition and natural language processing [1] to many-body quantum physics and classical statistical physics [2].

  • Our paper establishes an interpretable model of the weight space focusing on the global view of the landscape, and this model has three important predictions: (i) the deep-learning weight space is smooth and dominated by a single well-connected component, while the shallow network is characterized by a broken weight space; (ii) for the deep network, the under-parametrized and over-parametrized regimes belong to the same universality class with a common smooth landscape, which coincides with recent empirical findings of no substantial barriers between minima in the deep-learning loss landscape [6,7,8]. We focus on deep networks of binary weights.

  • In order to explore the internal structure of the weight space of deep learning, we introduce a distance-dependent term $x\sum_i \sigma_i^*\sigma_i$ into the exponent of the original Boltzmann distribution [Eq. (4)] as follows [32]: $P(\sigma) = \frac{1}{Z(x)}\exp\left(\sum_i h_i\sigma_i + \sum_{i<j}J_{ij}\sigma_i\sigma_j + x\sum_i \sigma_i^*\sigma_i\right)$ (see the sampling sketch below).
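As a rough illustration of how the distance-dependent term $x\sum_i \sigma_i^*\sigma_i$ can be used, the sketch below samples the tilted distribution with single-spin-flip Metropolis updates and records the overlap with a reference configuration $\sigma^*$ as the coupling $x$ is varied. The toy fields, couplings, and sampler settings are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_overlap(h, J, sigma_star, x, n_sweeps=400):
    """Sample the tilted measure
       P_x(sigma) ~ exp( sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j
                         + x * sum_i sigma_star_i sigma_i )
    with single-spin-flip Metropolis; return the mean overlap q = <sigma . sigma*> / N."""
    n = len(h)
    sigma = rng.choice([-1.0, 1.0], size=n)
    overlaps = []
    for sweep in range(n_sweeps):
        for i in rng.permutation(n):
            # energy change of flipping spin i (diagonal of J is zero)
            local = h[i] + J[i] @ sigma + x * sigma_star[i]
            dE = 2.0 * sigma[i] * local
            if dE <= 0 or rng.random() < np.exp(-dE):
                sigma[i] = -sigma[i]
        if sweep > n_sweeps // 2:               # keep samples after burn-in
            overlaps.append(sigma @ sigma_star / n)
    return float(np.mean(overlaps))

# Toy fields/couplings standing in for the fitted data-driven Ising model,
# plus a reference configuration sigma_star.
n = 50
h = 0.1 * rng.standard_normal(n)
J = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
sigma_star = rng.choice([-1.0, 1.0], size=n)

for x in (0.0, 0.5, 1.0, 2.0):
    q = metropolis_overlap(h, J, sigma_star, x)
    print(f"x = {x:.1f}  ->  overlap q = {q:.3f}")
```

Sweeping $x$ from zero upward traces how strongly the sampled configurations are pulled toward the reference, which is how the entropy landscape around a typical solution can be probed.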


Summary

INTRODUCTION

Artificial deep neural networks have achieved state-of-the-art performance in many industrial and academic domains, ranging from pattern recognition and natural language processing [1] to many-body quantum physics and classical statistical physics [2]. This least-structured (maximum entropy) modeling principle was previously used to analyze neural populations at the collective network-activity level [31,32]. Here, the effective model of practical deep learning is analyzed from an entropy landscape angle, in which the geometric structure of the entire weight space can be characterized in different contexts: over-parametrization, under-parametrization, and shallow networks. (iii) The most surprising prediction is that a special interior part with the largest entropy in the weight space emerges after learning, showing a liquid-like property compared to the two more constrained boundaries of the deep network. This part is conjectured to play a key role in fast learning dynamics.
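The "liquid-like" interior can be read as the statement that weights in the central layers retain nearly maximal entropy after learning. The toy sketch below is an assumption-laden illustration rather than the paper's analysis: it compares a simple independent-weight entropy upper bound across three hypothetical layers whose polarization is set by hand, just to show how such a layer-wise comparison could be set up.

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy_per_weight_bound(weight_samples):
    """Independent-weight upper bound on the entropy per weight (nats):
    S/N <= mean_i H(m_i), where H is the binary entropy of magnetization m_i."""
    m = weight_samples.mean(axis=0)                  # per-weight magnetization
    p = np.clip((1.0 + m) / 2.0, 1e-12, 1 - 1e-12)   # P(sigma_i = +1)
    H = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(H.mean())

# Toy stand-in: binary weight samples for three layers of a deep network,
# with the two boundary layers made more polarized (less random) by hand.
n_runs, n_per_layer = 500, 200
polarization = {"input layer": 0.8, "middle layer": 0.05, "output layer": 0.8}
for name, b in polarization.items():
    p_plus = 0.5 + b * (rng.random(n_per_layer) - 0.5)     # per-weight P(+1)
    samples = (rng.random((n_runs, n_per_layer)) < p_plus) * 2.0 - 1.0
    print(f"{name:12s}: entropy per weight <= {entropy_per_weight_bound(samples):.3f} nats "
          f"(max possible = ln 2 = {np.log(2):.3f})")
```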

DEEP LEARNING SETTING
DATA-DRIVEN ISING MODEL
ENTROPY LANDSCAPE ANALYSIS
Under-parametrization scenario
Over-parametrization scenario
Shallow networks
Effects of node-permutation symmetry on the deep-learning landscape
Algorithmic implication of the effective model
Findings
CONCLUSIONS