The binary perceptron is a fundamental model of supervised learning with nonconvex optimization, which lies at the root of the widely used deep learning. It can classify random high-dimensional data based on the marginal probabilities of its binary synapses, which can be estimated by belief propagation. However, the relationship between the instability of belief propagation and the equilibrium analysis of the model has remained elusive. Here, we establish this relationship by showing that the instability condition around the belief propagation fixed point is identical to the condition for breaking the replica-symmetric saddle-point solution of the free energy. Our analysis thus offers insight into other learning systems, bridging the gap between nonconvex learning dynamics and the statistical-mechanics properties of more complex neural networks.
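As a concrete illustration (not taken from the paper itself), the sketch below implements a standard Gaussian (large-N) approximation of belief propagation for the binary perceptron, which estimates the marginal magnetizations of the binary synapses. The update equations follow the textbook message-passing form for this model; the function names, the damping factor, and the convergence tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def H(x):
    # Gaussian tail probability H(x) = P(z > x)
    return norm.sf(x)

def bp_binary_perceptron(xi, max_iter=500, tol=1e-7, damping=0.5):
    """Belief propagation (Gaussian, large-N approximation) for a binary
    perceptron required to classify the rows of xi (shape P x N, entries
    +/-1) as +1.  Returns the marginal magnetizations m_i = <w_i> of the
    N binary synapses.  A sketch, not the paper's exact scheme."""
    P, N = xi.shape
    m = np.zeros((P, N))   # cavity magnetizations m_{i->a}
    u = np.zeros((P, N))   # factor-to-variable fields u_{a->i}
    for _ in range(max_iter):
        # cavity mean and variance of the pre-activation at each factor,
        # excluding the receiving synapse i
        omega = ((xi * m).sum(axis=1, keepdims=True) - xi * m) / np.sqrt(N)
        V = (1.0 - m**2).sum(axis=1, keepdims=True) - (1.0 - m**2)
        V = np.maximum(V / N, 1e-12)
        z = omega / np.sqrt(V)
        # derivative of log H(-z) w.r.t. the cavity mean gives the
        # effective field a factor sends back to a synapse
        u_new = xi * norm.pdf(z) / (np.sqrt(N * V) * np.maximum(H(-z), 1e-12))
        u = damping * u + (1.0 - damping) * u_new
        # variable side: cavity field excludes the receiving factor
        h_total = u.sum(axis=0, keepdims=True)
        m_new = np.tanh(h_total - u)
        if np.max(np.abs(m_new - m)) < tol:
            m = m_new
            break
        m = m_new
    return np.tanh(u.sum(axis=0))  # full marginal magnetizations

# Example: alpha = P/N = 0.2, well below the binary-perceptron capacity
rng = np.random.default_rng(0)
N, P = 1000, 200
xi = rng.choice([-1.0, 1.0], size=(P, N))
marginals = bp_binary_perceptron(xi)
w = np.sign(marginals)  # clip marginals to binary weights
print("fraction of satisfied patterns:", np.mean((xi @ w) > 0))
```

In this picture, the instability analyzed in the paper concerns whether small perturbations of the messages grow under iterations of the update above, and the result is that this condition coincides with the replica-symmetry-breaking instability of the equilibrium free energy.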