The study of artificial neural networks was originally inspired by neurophysiology and cognitive science, and has since produced a rich and diverse methodology with numerous applications in machine intelligence, computer vision, pattern recognition and other areas. The random neural network (RNN) is a probabilistic model inspired by the spiking behaviour of neurons; it admits an elegant mathematical treatment that yields its steady-state behaviour and supports efficient learning algorithms for recurrent networks. Second-order interactions, in which more than one neuron jointly acts upon other cells, have been observed in nature; they generalize the binary (excitatory–inhibitory) interaction between pairs of cells and give rise to synchronous firing (SF) by many cells. In this paper, we develop an extension of the RNN to synchronous interactions, in which two cells jointly excite a third cell; this local behaviour is in fact sufficient to create SF in large ensembles of cells. We describe the system state and derive its stationary solution, as well as an O(N^3) gradient descent learning algorithm for a recurrent network with N cells in which both the standard excitatory–inhibitory interactions and SF are present.
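As a point of reference, and as a sketch rather than the equations derived in the paper, the stationary solution of the standard RNN (without synchronous interactions) takes a well-known fixed-point form; here q_i denotes the stationary excitation probability of cell i, r_i its firing rate, Λ_i and λ_i the rates of external excitatory and inhibitory spike arrivals, and w^+(j,i), w^-(j,i) the excitatory and inhibitory weights from cell j to cell i:

\[
q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i}, \qquad
\lambda^+_i = \Lambda_i + \sum_{j} q_j\, w^+(j,i), \qquad
\lambda^-_i = \lambda_i + \sum_{j} q_j\, w^-(j,i),
\]

valid when each q_i < 1. The extension developed in this paper generalizes this fixed point so that the rates also account for pairs of cells that jointly excite a third cell.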