We study changes of coordinates that allow the embedding of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator–prey models, also called Lotka–Volterra systems. We first transform the equations for the neural network into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka–Volterra equations. Physics Letters A, 133(7–8), 378–382), in which the vector field of the dynamical system is expressed as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, the system can be transformed directly into Lotka–Volterra equations. The resulting Lotka–Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect this transformation to permit the application of existing techniques for the analysis of Lotka–Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka–Volterra systems are universal approximators of dynamical systems, just as continuous-time recurrent neural networks are.
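As a concrete illustration of the quasi-monomial-to-Lotka–Volterra step (a numerical sketch, not the paper's worked example), consider a quasi-monomial system dx_i/dt = x_i Σ_j A_ij Π_k x_k^{B_jk}. The monomials u_j = Π_k x_k^{B_jk} then satisfy the Lotka–Volterra equations du_j/dt = u_j (BA u)_j, so the embedded system has as many variables as there are monomials. The matrices A and B and the initial condition below are arbitrary illustrative values:

```python
import numpy as np

def qm_rhs(x, A, B):
    # quasi-monomial form: dx_i/dt = x_i * sum_j A[i,j] * prod_k x_k**B[j,k]
    u = np.prod(x ** B, axis=1)          # monomials u_j = prod_k x_k^{B_jk}
    return x * (A @ u)

def lv_rhs(u, M):
    # Lotka-Volterra form: du_j/dt = u_j * sum_l M[j,l] * u_l
    return u * (M @ u)

def rk4_step(f, y, h):
    # one classical Runge-Kutta step
    k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

# hypothetical small example: n = 2 variables, m = 3 monomials
A = np.array([[ 0.5, -1.0,  0.3],
              [-0.2,  0.4, -0.6]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
M = B @ A                                 # Lotka-Volterra interaction matrix

x = np.array([0.8, 1.2])                  # positive initial condition
u = np.prod(x ** B, axis=1)               # embedded initial condition

h = 1e-3
for _ in range(2000):                     # integrate both systems to t = 2
    x = rk4_step(lambda y: qm_rhs(y, A, B), x, h)
    u = rk4_step(lambda y: lv_rhs(y, M), u, h)

# the monomials of the quasi-monomial trajectory match the LV trajectory
print(np.allclose(np.prod(x ** B, axis=1), u, atol=1e-6))  # → True
```

Note that the Lotka–Volterra system here has dimension 3 while the original system has dimension 2, mirroring the dimension increase described above; the first variables of the embedded system (here u_1 = x_1, u_2 = x_2, since the first rows of B are the identity) reproduce the original trajectory.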