Abstract

Neural network models are based on a distributed computational scheme in which signals are propagated among neurons through weighted connections. The network topology defines the overall computation: each neuron performs a local operation, but the flow of information is dictated by the architecture, both in the forward pass and in error backpropagation. This chapter proposes a fully local alternative view of the neural network computational scheme, in which the architecture is expressed as a set of constraints whose satisfaction is handled in the Lagrangian framework. The resulting local propagation algorithm casts learning in neural networks as the search for saddle points in the adjoint space composed of the weights, the neurons' outputs, and the Lagrange multipliers. In particular, the case of graph neural networks is considered, for which the computationally expensive iterative learning procedure can be avoided by jointly optimizing the node states and the transition functions, with the state computation on the input graph expressed through a constraint satisfaction mechanism.
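
As a rough sketch of the constraint-based formulation summarized above (the symbols x_l, W_l, lambda_l, sigma and the loss V are illustrative assumptions, not notation taken from the chapter), the forward pass of an L-layer network can be rewritten as a set of architectural constraints x_l = sigma(W_l x_{l-1}), each paired with a Lagrange multiplier, yielding the Lagrangian

\[
\mathcal{L}(W, x, \lambda) \;=\; V(x_L, y) \;+\; \sum_{l=1}^{L} \lambda_l^{\top}\bigl(x_l - \sigma(W_l\, x_{l-1})\bigr).
\]

Learning then corresponds to searching for saddle points of this Lagrangian, for instance by descending on the weights W and the neuron outputs x while ascending on the multipliers lambda, so that each update only involves quantities attached to a single layer rather than a global backward pass. Under the same view, a graph neural network's state equation (one constraint per node, relating a node's state to those of its neighbours) replaces the inner fixed-point iteration usually required to compute the node states.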
