Learning, i.e., the estimation of weights and biases in neural networks, requires minimizing an output-error criterion, a problem usually solved with back-propagation algorithms. This paper assesses the potential of simultaneous perturbation stochastic approximation (SPSA) algorithms for handling this minimization problem. In particular, a variant of the first-order SPSA algorithm is developed that employs several numerical devices, including adaptive gain sequences, gradient smoothing, and a step-rejection procedure. For illustration purposes, several application examples in the identification and control of nonlinear dynamic systems are presented. This numerical evaluation covers the development of neural network models as well as the design of a model-based predictive neural PID controller.
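To make the core idea concrete, the following is a minimal sketch of a plain first-order SPSA update (not the paper's enhanced variant with adaptive gains, smoothing, or step rejection): all parameters are perturbed simultaneously by a Rademacher (±1) vector, and the gradient is estimated from only two loss evaluations per iteration, regardless of the parameter dimension. The function names, the toy quadratic loss, and the gain constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spsa_step(theta, loss, a_k, c_k, rng):
    """One basic first-order SPSA iteration.

    theta : current parameter vector
    loss  : scalar loss function of theta (the output-error criterion)
    a_k   : step-size gain for this iteration
    c_k   : perturbation magnitude for this iteration
    """
    # Simultaneous Rademacher perturbation of every component.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-sided gradient estimate; since delta_i = +/-1, dividing by
    # delta_i is the same as multiplying by it.
    g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2.0 * c_k) * delta
    return theta - a_k * g_hat

# Toy quadratic "output error" just to illustrate convergence behavior.
rng = np.random.default_rng(0)
theta = np.array([5.0, -3.0])
for k in range(2000):
    a_k = 0.1 / (k + 1) ** 0.602   # commonly used gain decay exponents
    c_k = 0.1 / (k + 1) ** 0.101
    theta = spsa_step(theta, lambda t: float(np.sum(t ** 2)), a_k, c_k, rng)
```

In a neural network setting, `theta` would hold the flattened weights and biases and `loss` would evaluate the network's output error on the training data; the appeal of SPSA here is that no analytic back-propagated gradient is needed.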